Functions of an OS
1. Process Management
The operating system helps in running many programs at the
same time. It keeps track of each running program (called a process), decides
which one should run next, and stops or starts them as needed. It makes sure
that all the programs get a fair chance to use the CPU.
2. Memory Management
The OS manages the computer's memory (RAM). It decides which
program will use how much memory and keeps track of it. When a program is
closed, it frees up the memory so that other programs can use it. This helps
the computer run smoothly without crashing.
3. File System Management
The operating system helps us to create, save, open, and
delete files. It organizes files in folders and keeps them safe. It also
controls who can open or edit a file to protect our data.
4. Device Management
The OS controls all the input and output devices like the
keyboard, mouse, printer, and monitor. It tells the devices what to do and
makes sure they work properly. It also uses special programs called drivers to
communicate with the devices.
5. User Interface
The operating system provides a way for us to use the
computer. It can be a text-based interface (like typing commands) or a
graphical interface (with icons and windows). This makes it easier for users to
give instructions and control the computer.
6. Security and Protection
The OS keeps the computer safe from unwanted access. It
allows only authorized users to log in and use the computer. It also protects
files and programs from viruses or other harmful programs.
7. Job Scheduling
When many tasks need to be done, the OS decides which one to
do first. It plans and schedules the work so that all tasks are completed in a
good order without wasting time or resources.
8. Error Detection and Handling
The operating system checks if there are any problems in the
computer, like hardware failures or program errors. If it finds any, it tries
to fix them or shows a message so the user can take action.
9. Networking
The OS helps the computer connect to other computers through
the internet or a local network. It allows sharing of files, printers, and
other resources easily between computers.
10. Resource Allocation
The operating system gives hardware resources like CPU time,
memory, or devices to different programs as needed. It makes sure all programs
get what they need and that nothing is wasted.
Development of Operating Systems
- 1940s-1950s: At first, computers had no operating system. People ran one program at a time by hand. Later, simple systems ran groups of programs called batches.
- 1960s: New systems let many people use the computer at the same time by sharing the processor. IBM made a system called OS/360 that worked on many computers.
- 1970s: Unix was made, letting many users and programs run at once. Personal computers started to appear, needing easier operating systems.
- 1980s: Microsoft made MS-DOS and early Windows with windows and icons. Apple made Mac OS with a graphical interface. Free software projects like GNU began.
- 1990s: Linux, a free system like Unix, was created. Windows 95 made computers easier to use. Development of what became Mac OS X started.
- 2000s: Windows XP was popular. Apple released Mac OS X. Smartphones started with Apple's iOS and Google's Android.
- 2010s: Windows added touch features and worked on many devices. Mac OS X was renamed macOS. Operating systems became more modern.
- 2020s: Windows 11 came with a new look. Operating systems now use AI and cloud services, and run on many devices like phones, laptops, and tablets.
Different Types of Operating System (Based on Processing Method)
- Batch Operating System
A batch operating system is one where users do not interact directly with the computer. They prepare their work and give it to the computer in a group called a batch. The computer then finishes one job at a time from the batch without stopping. This system is slow and was mostly used in older computers for repeated tasks like payroll or bank processing.
- Time-Sharing Operating System
A time-sharing operating system allows many users to use the computer at the same time. The system gives a small amount of time to each user and quickly switches between them. It happens so fast that everyone feels like their program is running at the same time. This type of system is useful in schools, offices, or computer labs where many people work on one computer system.
- Multiprocessing Operating System
A multiprocessing operating system uses two or more processors (CPUs) at the same time. These processors work together to handle many tasks at once. This makes the system much faster and is useful in powerful computers like servers or systems used for scientific research and big companies.
- Multitasking Operating System
A multitasking operating system allows one person to do several things at once on a computer. For example, you can play music, open a website, and type a document at the same time. The system quickly switches between these programs so everything runs smoothly. This type of system is used in personal computers running Windows and macOS.
- Real-Time Operating System (RTOS)
A real-time operating system is used where a fast and quick response is very important. It gives an answer as soon as the input is given. These systems are used in places like hospitals, robots, airplanes, or traffic systems where even a small delay can be dangerous. Some real-time systems allow no delay at all, while others allow very small delays.
- Distributed Operating System
A distributed operating system connects many computers together through a network and makes them work like a single computer. These connected computers share their files, memory, and tasks. The user can work from any of the connected computers without knowing where the task is being done. This system is used in cloud computing and large companies like Google and Facebook.
Classification of OS on the basis of Users
1. Single-User Operating System
A Single-User Operating System is designed for only one user at a time.
This means only one person can use the computer's resources like the CPU, memory, and storage.
However, the user can still open and use many programs at once, like browsing
the internet, playing music, and writing documents.
Single-user operating systems are mostly used in home
computers and laptops.
Examples of single-user operating systems are MS-DOS, Windows 95, Windows 98, and early versions of Mac OS.
2. Multi-User Operating System
A Multi-User Operating System allows multiple
users to use the computer at the same time
or at different times.
Each user can log in separately, and the operating system keeps their data and
files safe and private.
The OS manages the system resources and gives each user a fair share of the
CPU, memory, and other resources.
This type of OS is used in places like offices, schools,
universities, and servers, where many people use the same
computer system.
Examples of multi-user operating systems are UNIX, Linux, Windows Server, and mainframe operating systems.
Classification of OS on the basis of User Interface
Graphical User Interface (GUI)
A Graphical User Interface (GUI) is a type of interface where users interact with the computer using graphical elements like windows, icons, buttons, and menus. It allows people to use the computer easily by clicking with a mouse or tapping with a finger instead of typing commands.
GUI is very simple and user-friendly, which is why it is used in most modern computers and mobile devices. For example, in Microsoft Windows, you can open files, play music, or browse the internet just by clicking on icons. Similarly, in macOS or Ubuntu Desktop, users can drag and drop files, click on menus, and use graphical applications.
GUI is best for beginners because it is easy to understand and doesn’t require remembering any commands. However, it uses more memory and processing power, so it can be a little slower on old computers. Examples: Microsoft Windows, macOS, Ubuntu Desktop, Android, iOS
Command-Line Interface (CLI)
A Command-Line Interface (CLI) is a type of interface where users must type specific text commands to interact with the computer. There are no graphics or icons—only a black screen with a place to type. CLI is very powerful and fast if the user knows the right commands.
It is mainly used by system administrators, programmers, and advanced users who want more control over the system. For example, in MS-DOS, users can type "dir" to view files in a folder, or in a Linux terminal, you can type "ls" to list files or "sudo apt update" to update the system. CLI is lightweight and uses very little system resources, but it is difficult for beginners because it requires remembering exact commands and syntax. Examples: MS-DOS, Linux Terminal, Unix Shell (like Bash)
Unit 2: Process and Process Scheduling
2.1 Introduction to Process, Program and Process Life Cycle
Process
A process is a program that is currently under execution, so an active program can be called a process. Examples:
● Opening a web browser to search something on the internet: the browser becomes a process.
● Launching a music player to enjoy your favorite tunes: the music player is also a process.
In computing, a process is an instance of a computer program that is being executed by one or many threads. It contains the program code and its activity.
Modern operating systems support multithreading, meaning a process can have multiple threads running concurrently.
A Process has various
attributes associated with it. Some of the attributes of a Process are:
● Process ID: Every process is given a unique ID that distinguishes it from all other processes.
● Process state: Each process has a state associated with it at any instant of time, denoted by the process state. It can be ready, waiting, running, etc.
● CPU scheduling information: Each process is executed using a process scheduling algorithm such as FCFS, Round Robin, or SJF.
● I/O information: Each process needs some I/O devices for its execution, so information about the devices allocated and needed is crucial.
Program
A program is a piece of code, which may be a single line or many lines. For example, a simple C program:

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that
performs a specific task when executed by a computer. When we compare a program
with a process, we can conclude that a process is a dynamic instance of a
computer program.
Process Life Cycle
The Process Life Cycle refers to the sequence of states that
a process goes through from its creation to termination during its lifetime.
A process state represents
the current status of a process — or
what the process is doing at a particular
moment during its execution.
A process can have one of the following five
states at a time.
1. New: This is the initial state, when a process is first created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so they can run. A process may enter this state after the New state, or while running, if the scheduler interrupts it to assign the CPU to another process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, its state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as user input or a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
2.2 Process Control Block (PCB)
A Process Control Block (PCB) is a data structure maintained by the operating system for every process. It stores the following information:
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.
2. Process Privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers: The various CPU registers whose values must be saved when the process leaves the running state.
7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
8. Memory Management Information: Information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.
9. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O Status Information: The list of I/O devices allocated to the process.
The PCB is maintained for a process throughout its lifetime,
and is deleted once the process terminates.
Introduction to Process Scheduling
Process scheduling is an OS task that schedules processes in different states such as ready, waiting, and running. Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process scheduling allows the OS to allocate an interval of CPU time to each process. Another important reason for using a process scheduling system is that it keeps the CPU busy all the time, which helps achieve minimum response time for programs.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The
OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the
state of a process is changed, its PCB is unlinked from its current queue and
moved to its new state queue.
● Job queue
● Ready queue
● Device queues
Types of Process Scheduling
Process Scheduling handles the selection of a process for the
processor on the basis of a scheduling algorithm and also the removal of a
process from the processor. It is an important part of multiprogramming
operating system.
1. Short-Term Scheduler (CPU Scheduler)
The short-term scheduler is the part of the operating system that decides which process (or program) should run on the CPU next. It looks at all the processes that are ready and waiting to run and quickly picks one to give to the CPU. This scheduler works very fast and runs very often, usually every few milliseconds, because the CPU keeps switching between tasks. Its main goal is to make sure the CPU is always doing something useful and doesn't stay idle. It chooses tasks that are ready and puts them on the CPU for execution. This is also called CPU scheduling.
2. Medium-Term Scheduler
The medium-term scheduler helps manage system memory when there are too many processes running. Sometimes, the memory gets full, and the system becomes slow. In that case, this scheduler pauses some of the running processes and moves them out of the main memory to the hard disk (this is called swapping). These paused processes can be resumed later when there is enough space. This helps free up memory and keeps the system running smoothly. It doesn’t run as often as the short-term scheduler, but it plays an important role in balancing system load and memory usage.
3. Long-Term Scheduler (Job Scheduler)
The long-term scheduler decides which new jobs (or programs) are allowed to enter the system for processing. When many users or programs request to run, this scheduler picks a few jobs and loads them into memory to be processed. It runs only once in a while—when new jobs arrive or when the system needs more work to do. Its main goal is to control the number of processes in the system and make sure it’s not overloaded. It also tries to keep a good mix of processes—some that use the CPU a lot and some that wait for input/output—so that resources are used efficiently.
2.6 Preemptive Vs Non-Preemptive Scheduling
Preemptive scheduling is a CPU scheduling technique that works by dividing CPU time into slots and assigning them to processes. The time slot given may or may not be enough to complete the whole process. When the burst time of a process is greater than the CPU time slice, the process is placed back into the ready queue and executes again at its next chance. This scheduling is used when a process switches from the running state to the ready state.
Algorithms based on preemptive scheduling include Round Robin (RR), preemptive Priority, and SRTF (Shortest Remaining Time First).
Non-preemptive scheduling is a CPU scheduling technique in which a process takes the resource (CPU time) and holds it until the process terminates or moves to the waiting state. No process is interrupted until it completes, after which the processor switches to another process.
Algorithms based on non-preemptive scheduling include non-preemptive Priority and Shortest Job First.
Preemptive Vs Non-Preemptive Scheduling
● Preemptive: Resources are allocated to a process for a limited time. Non-preemptive: Resources are used and held by the process until it terminates.
● Preemptive: A process can be interrupted even before completion. Non-preemptive: A process is not interrupted until its life cycle is complete.
● Preemptive: Starvation may be caused by the insertion of higher-priority processes into the queue. Non-preemptive: Starvation can occur when a process with a large burst time occupies the system.
● Preemptive: Maintaining the ready queue and remaining times adds storage overhead. Non-preemptive: No such overhead is required.
Threads
A thread is the smallest unit of execution in a process. A thread has its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains its execution history.
A thread shares some information with its peer threads, such as the code segment, data segment, and open files. When one thread alters a code segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating system performance by reducing overhead; in other respects, a thread is equivalent to a classical process.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread
1. A process is heavyweight and resource intensive. A thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system. Thread switching does not need to interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources. All threads of a process can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first process is unblocked. While one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without threads use more resources. Multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others. One thread can read, write, or change another thread's data.
Advantages of Thread
● Threads minimize the context switching time.
● Use of threads provides concurrency within a process.
● Efficient communication.
● It is more economical to create and context switch threads.
● Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Example of Thread:
Word Processor: Suppose a user is typing text in a word processor. Typing the text is one thread; automatically formatting the text is another thread; automatically checking spelling mistakes is another thread; and automatically saving the file to disk is yet another thread.
2.7 Life Cycle of a Thread
1. Born State: The thread has just been created.
2. Ready State: The thread is waiting for the processor (CPU).
3. Running: The system has assigned the processor to the thread, meaning the thread is being executed.
4. Blocked State: The thread is waiting for an event to occur or for an I/O device.
5. Sleep: A sleeping thread becomes ready after the designated sleep time expires.
6. Dead: The execution of the thread is finished.
Life Cycle of a Threat
A threat in an OS is any potential danger (like malware or hackers) that can harm a system by exploiting vulnerabilities. The threat life cycle involves several key stages from detection to recovery:
Life Cycle of a Threat steps
1. Detection
A threat is first noticed by the system using tools like antivirus, firewall, or monitoring software.
2. Exploitation
The attacker tries to use a weakness in the system to get access or cause harm.
3. Propagation (Spread)
If the threat is able to, it spreads to other systems through networks or by tricking users.
4. Execution (Attack Happens)
The threat starts doing damage such as stealing data, locking files, or crashing the system.
5. Detection of Impact
The system or users realize something is wrong, like errors, slow speed, or strange activity.
6. Containment
Steps are taken quickly to stop the threat from spreading, such as disconnecting from the internet or blocking access.
7. Eradication
The threat is fully removed by deleting harmful files and fixing the security holes.
8. Recovery
The system is brought back to normal, files are restored, and it’s checked to make sure everything is safe.
9. Post-Incident Analysis
The incident is studied to find out how it happened and to improve the system’s security for the future.
Types of CPU Scheduling Algorithms
There are mainly six types of process scheduling algorithms:
1. First Come First Serve (FCFS)
2. Shortest Job First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling
1. First Come First Serve (FCFS)
· This is the oldest and simplest scheduling technique.
· The process that arrives first gets the CPU first.
· It is like standing in a queue: the first person gets served first.
· Non-preemptive (once a process starts, it runs till it finishes).
Key Characteristics:
· Easy to implement.
· Fair in order, but not always efficient.
· Can lead to long waiting times, especially if a long process comes first.
· Convoy Effect: one long job can delay many short ones.
Formulae:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Exercise: Given a set of processes with their arrival and burst times, if the CPU scheduling policy is FCFS, calculate the average waiting time and average turnaround time.
2. Shortest Job First (SJF)
· Picks the process with the shortest burst time (execution time).
· Like finishing smaller tasks first to reduce total waiting.
· Efficient, but needs to know how long each task will take.
· Non-preemptive (process runs until it finishes).
Key Characteristics:
· Minimizes average waiting time.
· Can lead to starvation (long jobs wait too long).
· Needs accurate burst time in advance.
3. Shortest Remaining Time First (SRTF)
· This is the preemptive version of SJF.
· Always runs the process with the shortest remaining burst time.
· A new process with a shorter time can interrupt the current one.
· Preemptive (switches between processes if needed).
Key Characteristics:
· Gives better turnaround and response time.
· Can also cause starvation of long processes.
· Needs to predict or know the burst time ahead of time.