HOMEWORK-1
NAME: VARUN REDDY MUTHYALA
700#: 700639203
1.13 The issue of resource utilization shows up in different forms in different types of operating systems. List what resources must be managed carefully in the following settings:
a. Mainframe or minicomputer systems
b. Workstations connected to servers
c. Mobile computers
Answer:-
a. In mainframe or minicomputer systems, resources such as memory, data storage, and network bandwidth must be managed carefully.
b. In workstations connected to servers, resources such as memory and processor time must be managed carefully.
c. In mobile computers, power consumption and memory must be managed carefully.
1.15 Describe the differences between symmetric and asymmetric multiprocessing. What are the advantages and disadvantages of multiprocessor systems?
The advantages of multiprocessor systems include:
1. Increased speed of execution.
2. Higher reliability: if one processor fails, the remaining processors take over the work of the failed processor.
3. Increased throughput.
4. Economy of scale.
Although multiprocessor systems have many advantages, they also have disadvantages, such as a more complex design compared with a uniprocessor system.
1.17 Consider a computing cluster consisting of two nodes running a database. Describe two ways in which the cluster software can manage access to the data on the disk. Discuss the benefits and disadvantages of each.
Answer:-
There are two ways in which the cluster software can manage access to the data on the disk:
1. Parallel clustering
2. Asymmetric clustering
Asymmetric clustering: one host runs the database while the other host monitors it. If the host accessing the database fails, the standby host takes over. This is well suited to providing redundancy, but it cannot use the processing capacity of both hosts.
Parallel clustering: both hosts run the database application and access the shared data at the same time. This uses the capacity of both machines, but it requires distributed locking to keep the shared data consistent.
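The asymmetric (hot-standby) arrangement described above can be sketched as a simple monitor-and-takeover loop. This is only an illustration of the idea, not real cluster software; the node names and the `alive` flag (standing in for a real heartbeat check) are invented for the example.

```python
class Node:
    """A toy cluster node; `alive` stands in for a real health check."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.serving = False

def monitor(active, standby):
    """The standby watches the active node and takes over if it fails."""
    if not active.alive and not standby.serving:
        standby.serving = True
        return f"{standby.name} took over from {active.name}"
    return f"{active.name} still serving"

primary = Node("node-a")
backup = Node("node-b")
primary.serving = True

print(monitor(primary, backup))  # primary is healthy, nothing happens
primary.alive = False            # simulate a crash of the active host
print(monitor(primary, backup))  # standby detects the failure and takes over
```

Note how the standby contributes nothing until the failure occurs, which is exactly the wasted capacity the answer mentions.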
A system administrator must ensure that the uptime, performance, resources, and security of the computers he or she manages meet the needs of the users without exceeding the budget, and is responsible for the integrity of the data and the efficiency and performance of the system.
Improved performance: a distributed DBMS fragments the database to keep data closer to where it is needed most, which helps avoid unnecessary data transfer.
MapReduce is a parallel programming model. In Hadoop, a cluster running this model has two kinds of nodes: a master node and slave nodes. The master node runs the NameNode and JobTracker processes; the slave nodes run the DataNode and TaskTracker processes. The NameNode manages the partitioning of the input dataset into blocks and decides on which nodes the blocks are stored. Hadoop has two core components: the HDFS layer and the MapReduce layer. The MapReduce layer reads from and writes to HDFS storage and processes data in parallel.
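The map, shuffle, and reduce phases of this model can be illustrated with a minimal in-process word count in plain Python. This is a sketch of the programming model only, not of Hadoop's distributed runtime; the documents are made-up sample data.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs, as a Hadoop mapper would."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hadoop stores blocks", "spark and hadoop"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["hadoop"])  # 2
```

In real Hadoop the mappers and reducers run as separate tasks on different nodes, but the data flow is the same as in this single-process version.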
The extensive use of computers has drastically changed the lives of many users. As a multifaceted tool, the computer is used for tasks including research, homework, and business-related work.
Processor: the processor is a very important piece of hardware in a computer. It allows the operating system to run and issues commands to other programs.
As technology advances, the processes that we use to manage that technology become more demanding, creating the need for new software and efficient processors. “The central processing unit or (CPU) is the heart of your computer and is used to run the operating system as well as all the programs.” (Chris Hoffman, CPU Basics: multiple CPU’s, cores and hyper threading explained.) With so much power in a single chip, we have created a powerful piece of technology that can be placed virtually anywhere.
Multiprogramming is used to keep the CPU busy most of the time, i.e. to increase CPU utilization. This is achieved through job scheduling: a subset of all jobs is kept in memory, and whenever the running job has to wait (for example, for I/O), the OS switches to another job to keep the CPU busy.
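The switching idea can be sketched as a toy scheduler that interleaves the CPU and I/O bursts of several jobs: after each burst, the CPU is handed to the next ready job, so while one job waits on I/O another can compute. The job names and burst lengths are invented for the illustration.

```python
from collections import deque

def run_multiprogrammed(jobs):
    """Each job is (name, bursts); a burst is ('cpu' or 'io', ticks).
    Run one burst, then give the CPU to the next ready job."""
    ready = deque(jobs)
    schedule = []                      # order in which bursts were run
    while ready:
        name, bursts = ready.popleft()
        kind, _ticks = bursts.pop(0)
        schedule.append((name, kind))
        if bursts:                     # job not finished: requeue it
            ready.append((name, bursts))
    return schedule

jobs = [("A", [("cpu", 3), ("io", 5), ("cpu", 2)]),
        ("B", [("cpu", 4), ("io", 1)])]
trace = run_multiprogrammed(jobs)
# Bursts of A and B alternate instead of A monopolizing the CPU.
```

A real scheduler also tracks when I/O completes and uses priorities, but the essential point is visible here: the CPU never sits idle while a ready job exists.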
Each cluster is independent of the other clusters and functions using its own hardware interfaces. A replication service is required to maintain data consistency across these diverse environments. Distribution reduces the network costs of query access, and it improves application availability and consistency. The discussion first covers cluster architecture and the types of replication, then the implementation of replication and the tools available, and finally the problems with replication and how to overcome them to ensure consistency.
In its broadest use, the term cloud computing refers to the delivery of scalable IT resources over the Internet, rather than hosting and operating those resources locally, for example on a school or college network. Those resources can include applications, storage, and computing infrastructure.
Allocation of resources: computing resources are pooled together to serve a large number of simultaneous users.
Today’s data center operation is changing, and although virtualization has brought many benefits, hardware infrastructure has grown out of control, resulting in operational complexity and inefficiency. Further complexity is introduced as new applications are developed and the user community becomes more mobile. How can users reduce complexity and simplify infrastructure management?
A popular way of addressing this problem is server consolidation, an optimization approach that leverages a technique called operating system virtualization.
A Spark cluster can use one of several types of cluster manager, which allocate resources across applications. Once the SparkContext is connected to the cluster manager, Spark acquires executors on the worker nodes of the cluster; an executor is a process that performs computation and storage for an application. Next, Spark sends the application code to the executors. Finally, the SparkContext sends tasks to the executors to run.
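The driver–executor relationship can be mimicked in plain Python with a thread pool standing in for Spark's executors. This is only an analogy for the architecture, not real Spark code; the `task` function and the sample partitions are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def task(partition):
    """'Application code' shipped to an executor: sum one data partition."""
    return sum(partition)

# The 'driver' splits the data into partitions and submits one task per
# partition to the pool, the way a SparkContext sends tasks to executors.
partitions = [[1, 2, 3], [4, 5], [6]]
with ThreadPoolExecutor(max_workers=3) as pool:   # pool ~ the executors
    partial_sums = list(pool.map(task, partitions))

total = sum(partial_sums)   # the driver combines the executors' results
```

In real Spark the executors are separate JVM processes on remote worker nodes and the cluster manager decides where they run, but the flow of work (driver partitions data, executors compute, driver aggregates) is the same.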
The new generation of application platforms has changed not only database production but also enterprise software production, by combining an in-memory database, data processing, and an application server in a single machine, providing the capabilities of a large data center in a single server. Speed, major efficiency improvements, low cost, and simplicity are features that will drive wide adoption. These platforms address the need for small applications to share data. This is the time for developing apps and cloud storage to come together and give businesses the freedom to use the best resources available.
Multiprocessor systems have been used for many years, and high-end programmers are familiar with the techniques to exploit multiprocessors for higher performance levels.