1. Life cycle of a process
- When a process forks, a complete copy of the executing program is made into the new process.
- This new process is a child of the parent process and has a new process identifier (PID).
- The fork() function returns the child's PID to the parent process.
- The fork() function returns 0 to the child process.
- This enables the two otherwise identical processes to tell which one they are.
- The parent process can either continue execution or wait for the child process to complete.
- The child, after discovering that it is the child, typically replaces itself with another program via exec(), so that the code and address space of the original program are lost.
- If the parent chooses to wait for the child to die, it will receive the exit code of the program that the child executed (see the sketch after this list).
- To prevent the child from becoming a zombie, the parent should call wait() on its children, either periodically or upon receiving the SIGCHLD signal, which indicates that a child process has terminated.
- A parent can also wait on its children asynchronously, by installing a signal handler for SIGCHLD, if it needs to ensure that everything is cleaned up.
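A minimal C sketch of this fork/exec/wait cycle, assuming POSIX; running "ls -l" is only an example of "another program" for the child to execute, not something prescribed by the text above.

    /* A minimal sketch of the fork/exec/wait cycle, assuming POSIX.
       Running "ls -l" is only an example child program. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                               /* fork() returned 0: we are the child */
            execlp("ls", "ls", "-l", (char *)NULL);   /* replace ourselves with another program */
            perror("execlp");                         /* reached only if exec failed */
            _exit(127);
        }
        int status;                                   /* fork() returned the child's PID: we are the parent */
        waitpid(pid, &status, 0);                     /* block until the child terminates */
        if (WIFEXITED(status))
            printf("child %d exited with code %d\n", (int)pid, WEXITSTATUS(status));
        return 0;
    }

The exit code printed by the parent is exactly the status that wait()/waitpid() collects when the child dies.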
1.1. Copy-on-write
After a fork, the parent and child can share the same set of physical pages, with each process getting its own private copy of a page only when it wants to modify that page.
The technique that makes this possible is called copy-on-write (COW).
With this technique, when a fork occurs, the parent process's pages are not copied for the child process.
Instead, the pages are shared between the child and the parent process.
Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for the process that performed the modification.
That process then uses the newly copied page in all future references, while the other process (the one which did not modify the shared page) continues to use the original copy, which is now no longer shared. The technique is called copy-on-write because the page is copied only when some process writes to it.
Copy-on-write is therefore lazy copying: the child copies a page only when it tries to write to it. Right after a fork, almost all of the child's memory is shared with the parent, yet even before either process does anything the child already has some private memory (pages it has modified or newly allocated). We can verify this, for example, by inspecting the child's memory accounting in /proc, as in the sketch below.
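A rough way to check this, assuming a Linux kernel new enough to provide /proc/<pid>/smaps_rollup (4.14 or later): the child below does nothing at all, yet the private dirty memory it reports is already non-zero.

    /* A minimal sketch, assuming Linux >= 4.14 (/proc/<pid>/smaps_rollup exists):
       a freshly forked child that touches nothing still reports some private dirty memory. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                               /* child: just report and exit */
            FILE *f = fopen("/proc/self/smaps_rollup", "r");
            if (f) {
                char line[256];
                while (fgets(line, sizeof line, f))
                    if (strncmp(line, "Private_Dirty:", 14) == 0)
                        printf("child %d  %s", (int)getpid(), line);
                fclose(f);
            }
            _exit(0);
        }
        waitpid(pid, NULL, 0);                        /* parent reaps the child */
        return 0;
    }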
This avoids unnecessary overhead, because copying an entire address space would be a slow and inefficient operation that wastes processor time and resources.
1.2. Zombie processes
A child process always first becomes a zombie before being removed from the process table.
When a process ends via exit, all of the memory and resources associated with it are deallocated so they can be used by other processes.
However, the process's entry in the process table remains.
The parent can read the child's exit status by executing the wait system call, whereupon the zombie is removed.
The wait call may be executed in sequential code, but it is commonly executed in a handler for the SIGCHLD signal (see the sketch below), which the parent receives whenever a child dies.
Under normal system operation, zombies are immediately waited on by their parent and then reaped by the system. Processes that stay zombies for a long time generally indicate a bug and cause a resource leak, although the only resources they occupy are a process table entry and its process ID.
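A minimal sketch of such a SIGCHLD handler, assuming POSIX signals; the WNOHANG loop reaps every child that has already exited, since several children may terminate before the handler runs.

    /* A minimal sketch of asynchronous reaping via SIGCHLD, assuming POSIX. */
    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void reap_children(int sig)
    {
        (void)sig;
        while (waitpid(-1, NULL, WNOHANG) > 0)        /* several children may have exited */
            ;
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = reap_children;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;                     /* restart interrupted system calls */
        sigaction(SIGCHLD, &sa, NULL);

        if (fork() == 0)                              /* child exits right away */
            _exit(0);

        sleep(1);                                     /* parent keeps running; the child is reaped, never lingering as a zombie */
        return 0;
    }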
1.3. wait
A child process is not completely removed until its parent learns of the termination via the wait() system call.
A process (or task) may wait on another process to complete its execution.
The parent process issues a wait system call, which suspends its execution while the child executes.
When the child process terminates, it returns an exit status to the operating system, which is then returned to the waiting parent process.
The parent process then resumes execution.
1.4. Orphan process
A child process whose parent process terminates before it does becomes an orphan process.
Such situations are typically handled with a special "root" (or "init") process, which is assigned as the new parent of a process when its parent process exits.
This special process detects when an orphan process terminates and then retrieves its exit status, allowing the system to deallocate the terminated child process.
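A small sketch to observe this on Linux: the parent exits immediately, and after a moment the orphaned child reports its new parent PID (typically 1, i.e. init/systemd, or a designated subreaper).

    /* A minimal sketch, assuming Linux: the parent exits first, so the child is
       reparented to init/systemd (PID 1) or to a subreaper. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        if (fork() == 0) {                            /* child */
            sleep(1);                                 /* give the parent time to exit */
            printf("child %d: parent is now %d\n", (int)getpid(), (int)getppid());
            return 0;
        }
        return 0;                                     /* parent exits immediately, orphaning the child */
    }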
1.5. Process States
ps aux
In the STAT column, you’ll see:
- R: running or runnable; it is just waiting for the CPU to process it
- S: interruptible sleep, waiting for an event to complete, such as input from the terminal
- D: uninterruptible sleep; such processes cannot be killed or interrupted with a signal, and usually the only way to make them go away is to reboot or to fix the underlying issue
- Z: zombie; as discussed in the zombie section above, zombies are terminated processes that are waiting to have their statuses collected
- T: stopped, a process that has been suspended/stopped
2. Thread
A thread is an execution unit that resides within a process and has its own program counter, stack, and set of registers.
Multiple threads can exist within one process, executing concurrently and sharing resources such as memory, while different processes do not share these resources.
The threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
Threads in the same process share:
- process instructions
- open files and data
- signals and signal handlers
- current working directory
- user and group IDs
A thread is also called a lightweight process (LWP).
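To illustrate this sharing, here is a minimal POSIX threads sketch (compile with -pthread); the counter and mutex names are made up for the example. Two threads increment the same global variable, which they can only do because they live in the same address space.

    /* A minimal sketch of threads sharing process memory, assuming POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    static long shared_counter = 0;                   /* one variable, visible to every thread in the process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);                /* shared data still needs synchronization */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %ld\n", shared_counter);   /* 200000: both threads updated the same memory */
        return 0;
    }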
2.1. Implementations
- LinuxThreads
  - The default thread implementation since Linux kernel 2.0 (introduced in 1996).
- Native POSIX Thread Library (NPTL)
  - NPTL has been part of Red Hat Enterprise Linux since version 3, and in the Linux kernel since version 2.6. It is now a fully integrated part of the GNU C Library.
- Next Generation POSIX Thread (NGPT)
  - An IBM-developed version of the POSIX thread library. The NGPT team collaborated closely with the NPTL team and combined the best features of both implementations into NPTL.
2.2. Threads vs. processes pros and cons
- Processes are typically independent, while threads exist as subsets of a process.
- Processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources.
- Processes have separate address spaces, whereas threads share their address space.
- Processes interact only through system-provided inter-process communication mechanisms.
- Context switching between threads in the same process typically occurs faster than context switching between processes.
Advantages and disadvantages of threads vs processes include:
- Lower resource consumption: using threads, an application can operate using fewer resources than it would need when using multiple processes.
- Simplified sharing and communication: unlike processes, which require a message-passing or shared-memory mechanism to perform inter-process communication (IPC), threads can communicate through the data, code, and files they already share.
- One thread can crash the whole process: because threads share the same address space, an illegal operation performed by one thread can crash the entire process; a single misbehaving thread can therefore disrupt the processing of all the other threads in the application.
2.2.1. When should you prefer fork() over threading and vice versa?
Prefer fork() when you are doing a far more complex task than just instantiating a worker, or when you want the implicit security sandboxing of separate processes.
2.2.2. If I want to call an external application as a child, then should I use fork() or threads to do it?
If the child will run an external program, use fork() followed by exec(); the same applies if the child will do an identical task to the parent with identical code, where a plain fork() is enough. For smaller subtasks within the same program, use threads.
2.2.3. Is it a bad thing to call fork() inside a thread?
It is tricky, and computationally rather expensive, to fork from inside a thread. fork() duplicates only the calling thread, so locks and other state held by the remaining threads can be left inconsistent in the child; in practice the child should do little more than call exec() right after the fork.
3. Process Memory
A process uses its own memory area to perform work; that area is divided into the segments listed below (a sketch showing where typical variables live follows the list).
- Text Segment
  - The text segment (a.k.a. the instruction segment) contains the executable program code and constant data.
- Data Segment
  - Heap
    - The heap is the segment from which dynamically allocated memory is provided (e.g. by malloc()).
  - BSS
    - The area where zero-initialized data is stored. All global variables that are not initialized in the program are stored in the BSS segment.
  - Data
    - The area where initialized data is stored.
- Stack Segment
  - The stack segment is used by the process to store automatic (local) variables, register variables, and function call information.
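The layout above can be made concrete with a small sketch that prints the addresses of variables from each region; this is a minimal illustration assuming a typical Linux/glibc build, and the exact addresses will differ from run to run.

    /* A minimal sketch, assuming a typical Linux/glibc program, of where
       different kinds of variables end up. */
    #include <stdio.h>
    #include <stdlib.h>

    int initialized_global = 42;                      /* data segment: initialized global */
    int uninitialized_global;                         /* BSS: zero-initialized global */

    int main(void)
    {
        int local = 1;                                /* stack: automatic variable */
        int *dynamic = malloc(sizeof *dynamic);       /* heap: allocated with malloc() */

        printf("text  (function)      : %p\n", (void *)main);   /* cast is fine on Linux */
        printf("data  (initialized)   : %p\n", (void *)&initialized_global);
        printf("bss   (uninitialized) : %p\n", (void *)&uninitialized_global);
        printf("heap  (malloc)        : %p\n", (void *)dynamic);
        printf("stack (local)         : %p\n", (void *)&local);

        free(dynamic);
        return 0;
    }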
To see which processes use a shared memory segment, you can use ipcs -mp to get the PIDs of each segment's creator and last operator, and then locate where a segment is mapped with the command grep [shared memory segment] /proc/*/maps.
ipcs shows information on the inter-process communication facilities for which the calling process has read access. By default it shows information about all three resources: shared memory segments, message queues, and semaphore arrays.
4. Process priority (nice)
In Linux we can give the scheduler guidelines to follow when it is deciding which of all its tasks to run next. These guidelines are called the niceness or nice value.
The "niceness" scale goes from
-
-20 (highest priority value)
-
19 (lowest priority value)
-
default is 0
The nice priority applies to ordinary user programs (processes in the default time-sharing scheduling class).
Priority is all about managing processor time.
nice runs a program with a modified scheduling priority.
chrt allows you to set the scheduling policy as well as the priority.
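The same nice value can also be changed from inside a program; a minimal sketch, assuming Linux/POSIX, where nice(2) and getpriority(2) operate on the calling process (nice(1) and renice(1) do the equivalent from the shell):

    /* A minimal sketch, assuming Linux/POSIX, of changing the calling
       process's nice value programmatically. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void)
    {
        int before = getpriority(PRIO_PROCESS, 0);    /* 0 means the calling process */
        nice(10);                                     /* make ourselves "nicer": lower priority */
        int after = getpriority(PRIO_PROCESS, 0);
        printf("nice value: %d -> %d\n", before, after);
        return 0;
    }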
5. Scheduler
The scheduler is the part of the Linux kernel that decides which runnable process the CPU will execute next.
It handles CPU resource allocation for executing processes, and aims to maximize overall CPU utilization while also maximizing interactive performance.
The scheduler makes it possible to execute multiple programs at the same time, thereby sharing the CPU among users and processes with varying needs.
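A minimal sketch, assuming Linux, of querying the scheduling policy that the scheduler applies to the current process; chrt -p <pid> reports the same information from the shell.

    /* A minimal sketch, assuming Linux, of querying the current scheduling policy. */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        int policy = sched_getscheduler(0);           /* 0 means the calling process */
        switch (policy) {
        case SCHED_OTHER: puts("SCHED_OTHER (default time-sharing policy)"); break;
        case SCHED_FIFO:  puts("SCHED_FIFO (real-time, first-in first-out)"); break;
        case SCHED_RR:    puts("SCHED_RR (real-time, round-robin)"); break;
        default:          printf("other policy: %d\n", policy); break;
        }
        return 0;
    }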
6. Context switching
A context switch is the process of storing the state of a process or thread so that it can be restored and resume execution at a later point. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking operating system.
In the Linux kernel, context switching involves:
- switching registers
- switching the stack pointer
- switching the program counter
- flushing the translation lookaside buffer (TLB)
- loading the page table of the next process to run (unless the old process shares its memory with the new one)
7. Interrupts
An interrupt is an event that alters the normal execution flow of a program and can be generated by hardware devices or even by the CPU itself.
Interrupts can be grouped into two categories based on the source of the interrupt:
- synchronous, generated by executing an instruction
- asynchronous, generated by an external event
  - For example, a network card generates an interrupt to signal that a packet has arrived.
Information related to hardware interrupts is available in /proc/interrupts.