ASSIGNMENT-1 CHAPTER 1
1.12) 1. Stealing or copying a user's files; 2. Writing over another program's area in memory (belonging to another user or to the OS); 3. Using system resources (CPU, disk space) without proper accounting; 4. Causing the printer to mix output by sending data while another user's file is printing. Probably not, because any protection scheme devised by a human can also be broken by a human, and the more complex the scheme, the harder it is to be confident of its correct implementation.
1.15) Special hardware can differentiate the multiple processors, or the software can be written to allow only one boss and multiple workers.
1. Increased throughput: By increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, N programmers working closely together do not produce N times the amount of work a single programmer would produce.
2. Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data.
3. Increased reliability: If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.
In symmetric multiprocessing (SMP) systems, performance is high because each processor has its own registers and cache, and the operating system can allocate processes to any processor, with each process running on its own. So if there are 3 ready processes and 3 processors, all 3 processes can run simultaneously and complete at roughly the same time. In asymmetric multiprocessing systems, performance is not as high, because the master processor schedules all work and can process only one request at a time. Only after the master allocates a process to a worker processor can that process run.
A good processor will enable faster performance from the computer and will allow power efficiency when working with graphics. CPU clock speed is measured in gigahertz (GHz), that is, billions of cycles per second. A better CPU will allow programs like Photoshop to run faster and more smoothly. A good processor allows the computer to run the software and work out the calculations needed to manipulate and edit graphics.
Symmetric multiprocessing treats all processors as peers, and I/O can be handled on any CPU. Asymmetric multiprocessing has one master CPU, and the remaining CPUs are slaves.
Mainframes and personal computers were drastically different when they were first introduced. The mainframe took up whole buildings, and the personal computer was only an interface to a mainframe. Today, their similarities are growing; eventually we will not be able to tell the difference. As costs come down, mainframes too will become personal computers, or an add-on to a personal computer, for the price of a hard drive today. Back in the day, mainframes were huge.
This efficiency can be achieved by virtualization [1][2][3]. By virtualization, we mean that a single physical resource can be exposed as multiple virtual resources, or multiple physical resources can be exposed as a single virtual resource. A resource can be anything: a server, an OS, an application, or a storage device. The main aim of virtualization is to make efficient use of limited IT resources by putting otherwise idle resources to work [4].
In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues. Every time one process has to wait, another process can take over use of the CPU. Scheduling of this kind is a fundamental operating-system function [18].
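The waiting-and-switching pattern described above can be sketched as a toy simulation (the process names and CPU-burst lengths below are invented for illustration; this is a model of the idea, not an operating system):

```python
from collections import deque

def schedule(procs):
    """procs: dict mapping process name -> list of CPU burst lengths.

    After each burst except its last, a process blocks on I/O, and the
    CPU is handed to the next ready process, as described in the text.
    Returns the order in which the CPU was allocated.
    """
    ready = deque(procs)          # all processes are kept "in memory"
    order = []
    while ready:
        name = ready.popleft()
        order.append((name, procs[name].pop(0)))  # run one CPU burst
        if procs[name]:           # more bursts remain: process waits
            ready.append(name)    # for I/O, then becomes ready again
    return order

jobs = {"A": [3, 2], "B": [4], "C": [1, 5]}
print(schedule(jobs))
# The CPU never sits idle while any process is ready:
# [('A', 3), ('B', 4), ('C', 1), ('A', 2), ('C', 5)]
```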
Manufacturers were concerned about how to resolve the problem of transistors overheating, since generating more power increases the temperature of the chips. To resolve the problem, the design engineers changed the design of the microprocessors and turned to parallelism, i.e., increasing the number of cores in a chip.
This technology works by measuring the percentage of utilization on each central processing unit. The advantage for HP customers is that they pay only for the processing they are utilizing. They also have the option to use additional processors, so processing is not limited. In addition, different versions of the same products are priced differently with respect to the configuration of their components.
In 1978, Intel came out with the 8086 chip. This chip had 29,000 transistors, 20 address lines, and could “talk with up to 1MB of RAM ... designers never suspected anyone would ever need more than 1 MB of RAM” (PCMech, 2001, para. 4). Intel continued to produce its 8000 series chips, increasing the speed and the memory each time. In 1982, the 286 was the first processor to have protected mode, which was later used by Windows and other operating systems to allow programs to run separately but concurrently (PCMech, 2001, para. 8). In the late 1980s, Intel came out with the 386. The 386 was a huge step forward, as it had 275,000 transistors, came in a 33 MHz version, worked with 4 GB of RAM, and could support a virtual memory of 64 TB (PCMech, 2001, para. 9). In 2002, hyper-threading came out in the Pentium 4 HT, which meant that the CPU could be fooled into thinking it had two CPUs for each one that it actually had. Using hyper-threading along with additional cores has enhanced performance and speed because some cores are utilized for programs while others perform background jobs (Hoffman, 2014, paras. 6,7). Another way CPUs have been able to increase speed is by raising the number of cores per CPU socket, and utilizing an I/O Hub “called QuickPath Interconnect” (Santana, 2014, p. 565). The use of multiprocessing has been the key for the development of today’s CPUs.
The most common mechanism for implementing multiprogramming was the introduction of the interrupt concept, which is when the CPU is notified of events needing operating-system services.
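As a user-space analogy for this notification pattern (not actual hardware interrupt handling), POSIX signals follow the same shape: a handler is registered, and the runtime invokes it when the event is delivered, instead of the program polling for it. A minimal sketch, assuming a Unix system and Python 3.8+:

```python
import signal

# Events delivered "asynchronously" are recorded here by the handler.
events = []

def handler(signum, frame):
    # Invoked by the runtime when the signal (our "interrupt") arrives;
    # the main program never polls for it.
    events.append(signum)

# Register the handler, then deliver the signal to this process.
signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)

print(events)  # the handler ran and recorded the signal number
```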
This eliminates the need for workers to regularly keep an eye on the machines, which results in a considerable increase in productivity, because one operator can operate a number of machines at the same time, an arrangement frequently termed multi-process handling.
Resource management in an operating-systems environment is the controlled allocation and de-allocation of system resources, for example processor cores, memory pages, bandwidth, and many others. These resources tend to be shared by the various programs running on the machine, which require them in order to run correctly; the machine must allocate just enough resources so that everything can run smoothly. There are two methods of multiplexing (sharing) these resources: time multiplexing and space multiplexing. On a computer with several users, the importance of managing and protecting resources such as memory and processor cores is even greater than otherwise, because users share files with each other, which could potentially end in disaster for the end user.
By implementing multi-core processors, we can dramatically increase a computer’s capabilities and computing resources, providing better responsiveness, improving multithreaded throughput, and delivering the advantages of parallel computing to properly threaded mainstream applications (Ramanathan). When multi-core processing was just beginning, there were already immediate benefits. One immediate benefit was that multi-core processors improved an operating system’s ability to multitask applications. For instance, say you have a virus scan running in the background while you’re working on your word-processing application (Ramanathan). Another major multi-core benefit comes from individual applications optimized for multi-core processors (Ramanathan). These applications, when properly programmed, can split a task into multiple smaller tasks and run them in separate threads.
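A sketch of that split-into-smaller-tasks pattern, using Python's standard ThreadPoolExecutor (the chunked-sum task and the function name parallel_sum are invented for illustration; note that in CPython the GIL limits speedups for pure-Python CPU-bound work, so the benefit is clearest for I/O-bound tasks or C-backed libraries):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Split one task (summing a list) into smaller chunks, and sum each
    # chunk in its own thread, then combine the partial results.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))

print(parallel_sum(list(range(1000))))  # same result as sum(range(1000)): 499500
```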
Real time: Responds to input instantly. General-purpose operating systems, such as DOS and UNIX, are not real-time.
Multiprocessing is a mode of operation in which two or more processors in a computer simultaneously process two or more different portions of the same program or set of instructions. Multiprocessing refers to a computer system’s ability to support more than one process or program at the same time. Multiprocessing operating systems enable several programs to run in parallel (Hosch). It is typically carried out by two or more microprocessors, each of which is a central processing unit (CPU) on a single tiny chip. Supercomputers typically combine thousands of such microprocessors to interpret and execute instructions. UNIX is one of the most broadly used multiprocessing systems today; others include OS/2
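This division of one program's work across processors can be sketched with Python's standard multiprocessing module (the square function and the inputs are illustrative, not from the source; each worker is a separate OS process that can run on its own core when one is available):

```python
from multiprocessing import Pool

def square(n):
    # The unit of work handed to each worker process.
    return n * n

if __name__ == "__main__":
    # Divide the same program's work among several worker processes;
    # the OS may run them simultaneously on different cores.
    with Pool(processes=4) as pool:
        print(pool.map(square, [1, 2, 3, 4]))  # → [1, 4, 9, 16]
```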