Q1 List the four major categories of the benefits of multithreaded programming. Briefly explain each.
The benefits of multithreaded programming fall into four categories: responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
Responsiveness means that a multithreaded application can continue running even if part of it is blocked or performing a lengthy operation. Resource sharing arises because the threads of an application run within the same address space and share the code, data, and other resources of the process to which they belong. Economy follows from that sharing: creating and context-switching threads is much cheaper than creating and switching between processes. Finally, a single-threaded process can execute on only one processor regardless of how many are present, whereas multiple threads can run in parallel on multiple processors, increasing concurrency.
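The resource-sharing and multiprocessor points can be illustrated with a minimal sketch, assuming POSIX threads (pthread_create/pthread_join); the array data and the function sum_half are illustrative names, not part of the question. Both workers read the same array in the one shared address space and may execute on different processors at the same time.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static long data[N];                /* one copy, visible to every thread   */
static long partial_sum[2];         /* each worker writes its own slot     */

static void *sum_half(void *arg)
{
    long id = (long)arg;            /* 0 sums the first half, 1 the second */
    for (long i = id * (N / 2); i < (id + 1) * (N / 2); i++)
        partial_sum[id] += data[i];
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (long i = 0; i < N; i++)
        data[i] = 1;
    for (long id = 0; id < 2; id++)     /* the two workers may run on different CPUs */
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("total = %ld\n", partial_sum[0] + partial_sum[1]);  /* prints 1000000 */
    return 0;
}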
Q2 What resources are used when a thread is created? How do they differ from those used when a process is created?
When a thread is created, a thread context must be set up: a register set, storage for saving that state during a context switch, a private stack to hold procedure-call arguments, return values, and return addresses, and thread-local storage.
Code and data are shared with the parent process (and with the process's other threads), so no loading of code or allocation of a new address space is necessary.
Process creation requires similar per-thread structures, plus a new address space with storage for the program's instructions and data.
Code and data must be loaded or copied into that memory for every process; nothing is shared with other processes.
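The difference is easy to see in a short sketch, assuming a POSIX system (pthread_create and fork are standard calls; the variable counter and the function thread_body are illustrative names only): the new thread updates the parent's data directly, whereas the forked child gets, and modifies, its own copy of the address space.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;           /* lives in the process's data segment */

static void *thread_body(void *arg)
{
    counter++;                    /* same memory as the creating thread */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);   /* new stack + registers only */
    pthread_join(tid, NULL);
    printf("after thread: counter = %d\n", counter); /* prints 1 */

    pid_t pid = fork();                              /* duplicates the address space */
    if (pid == 0) {
        counter++;                                   /* modifies only the child's copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   counter = %d\n", counter); /* still 1 in the parent */
    return 0;
}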
Q3 What is a thread pool and why is it used?
A thread pool is a collection of threads, created at process startup, that sit idle and wait for work to be assigned to them. This places a bound on the number of concurrent threads associated with a process and avoids the overhead of creating a new thread for each request and destroying it when the request completes.
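The following is a minimal thread-pool sketch, assuming POSIX threads; the types pool_t and task_t, the queue size, and the worker count are illustrative choices, not a standard API. A fixed set of workers is created once at startup, requests are placed on a bounded queue, and an existing worker picks each one up, so no thread is created or destroyed per request.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

typedef struct { void (*func)(void *); void *arg; } task_t;

typedef struct {
    pthread_t workers[NUM_WORKERS];
    task_t queue[QUEUE_SIZE];
    int head, tail, count, shutdown;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
} pool_t;

static void *worker(void *p)
{
    pool_t *pool = p;
    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (pool->count == 0 && !pool->shutdown)
            pthread_cond_wait(&pool->not_empty, &pool->lock);
        if (pool->count == 0 && pool->shutdown) {   /* queue drained and told to stop */
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        task_t t = pool->queue[pool->head];
        pool->head = (pool->head + 1) % QUEUE_SIZE;
        pool->count--;
        pthread_mutex_unlock(&pool->lock);
        t.func(t.arg);                              /* run the task outside the lock */
    }
}

static void pool_init(pool_t *pool)
{
    pool->head = pool->tail = pool->count = pool->shutdown = 0;
    pthread_mutex_init(&pool->lock, NULL);
    pthread_cond_init(&pool->not_empty, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)           /* workers created once, at startup */
        pthread_create(&pool->workers[i], NULL, worker, pool);
}

static void pool_submit(pool_t *pool, void (*func)(void *), void *arg)
{
    pthread_mutex_lock(&pool->lock);
    if (pool->count < QUEUE_SIZE) {                 /* bounded queue: excess work is dropped */
        pool->queue[pool->tail] = (task_t){ func, arg };
        pool->tail = (pool->tail + 1) % QUEUE_SIZE;
        pool->count++;
        pthread_cond_signal(&pool->not_empty);
    }
    pthread_mutex_unlock(&pool->lock);
}

static void pool_shutdown(pool_t *pool)
{
    pthread_mutex_lock(&pool->lock);
    pool->shutdown = 1;
    pthread_cond_broadcast(&pool->not_empty);
    pthread_mutex_unlock(&pool->lock);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(pool->workers[i], NULL);
}

static void print_task(void *arg) { printf("task %ld\n", (long)arg); }

int main(void)
{
    pool_t pool;
    pool_init(&pool);
    for (long i = 0; i < 8; i++)
        pool_submit(&pool, print_task, (void *)i);
    pool_shutdown(&pool);                           /* waits for queued tasks to finish */
    return 0;
}

A production pool would also handle a full queue (by blocking or rejecting submissions) rather than silently dropping work as this sketch does.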
Q4 What are the differences between user-level threads and kernel-supported threads?
User-level threads have no kernel support, so they are very inexpensive (in terms of resource demands) to create, destroy, and switch among; switching between them is done entirely in user space and does not require a trap into the kernel.
Kernel-supported threads are more expensive (in resources) because system calls are needed to create and destroy them, and the kernel must schedule them and mediate their access to the CPU. They are also more powerful: the kernel schedules them independently, and a thread that blocks does not prevent the other threads of its process from running.
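As a rough illustration, assuming pthreads that are kernel-supported (the usual one-to-one model on modern systems): one thread blocks in sleep() while the other continues to run, because the kernel blocks and schedules each thread individually. Under a purely user-level many-to-one library, that blocking system call would stall every thread in the process. The names blocker and worker are illustrative only.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    sleep(2);                     /* blocks only this thread in the kernel */
    printf("blocker: woke up\n");
    return NULL;
}

static void *worker(void *arg)
{
    for (int i = 0; i < 4; i++) { /* keeps running while the other thread sleeps */
        printf("worker: still making progress (%d)\n", i);
        usleep(300000);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, blocker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}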