Archive for January, 2011

By Kato Mivule

Operating Systems

Non-contiguous memory allocation does not require that a file's size be determined at the start; the file grows as needed over time. A major advantage is reduced waste of disk space and flexibility in memory allocation: the operating system allocates memory to the file as it is needed. [1]

Non-contiguous memory allocation offers the following advantages over contiguous memory allocation: [2]

  • Allows the sharing of code and data among processes.
  • External fragmentation is nonexistent with non-contiguous memory allocation.
  • Virtual memory is strongly supported under non-contiguous memory allocation.

Non-contiguous memory allocation methods include Paging and Segmentation.

Paging

Paging is a non-contiguous memory allocation method in which physical memory is divided into fixed-size blocks called frames, whose size is a power of 2, typically ranging from 512 to 8,192 bytes. Logical memory is divided into blocks of the same size called pages. For a program of n pages to be executed, n free frames are needed to load the program. [3]
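To make the page/frame split concrete, here is a minimal sketch in Python of the address translation paging implies, assuming a hypothetical 4,096-byte page size and a hand-filled page table; in a real system this lookup is performed by the hardware MMU.

```python
# A minimal sketch of paged address translation (illustrative values only).
PAGE_SIZE = 4096  # must be a power of 2

page_table = {0: 5, 1: 9, 2: 1}  # page number -> frame number (assumed mapping)

def translate(logical_address: int) -> int:
    """Split a logical address into (page, offset) and map it to a frame."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]   # a KeyError here would be a fault
    return frame_number * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```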

Some of the advantages and disadvantages of paging as noted by Dhotre include the following: [4]

Advantages:

  • Paging eliminates external fragmentation.
  • Multiprogramming is supported.
  • The compaction overheads that come with relocation are eliminated.

Disadvantages:

  • Paging increases the cost of computer hardware, as page addresses are mapped in hardware.
  • Memory must be set aside to store structures such as page tables.
  • Some memory stays unused when the available blocks are not sufficient to hold the address space of the jobs to be run.

Segmentation

Segmentation is a non-contiguous memory allocation technique that supports the user's view of memory. A program is seen as a collection of segments such as the main program, procedures, functions, methods, the stack, objects, and so on. [3]
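A minimal sketch of how a segment table might translate a (segment, offset) pair, assuming illustrative base and limit values; the limit check is what gives segmentation its protection properties.

```python
# A minimal sketch of segmented address translation (illustrative table).
segment_table = {
    0: (1400, 1000),  # segment 0: base 1400, limit 1000 (e.g., main program)
    1: (6300, 400),   # segment 1 (e.g., the stack)
    2: (4300, 400),   # segment 2 (e.g., a procedure)
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:                # hardware would trap on this
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))                # 4300 + 53 = 4353
# translate(1, 500) would raise: offset 500 exceeds segment 1's limit of 400
```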

Some of the advantages and disadvantages of segmentation as noted by Godse et al include the following: [5]

Advantages:

  • Internal fragmentation is eliminated under segmentation.
  • Segmentation fully supports virtual memory.
  • Dynamic segment growth is fully supported.
  • Segmentation supports dynamic linking.
  • Segmentation allows the user to view memory in a logical sense.

Disadvantages:

  • Segment size is bounded by the size of main memory.
  • Segments are difficult to manage on secondary storage.
  • Segmentation is slower than paging.
  • Segmentation falls victim to external fragmentation, even though it eliminates internal fragmentation.

Sources

[1] Achyut S. Godbole, “Operating Systems”, Tata McGraw-Hill, 2005, ISBN 007059113X, 9780070591134, Page 110

[2] P.S. Gill, “Operating Systems Concepts” Laxmi Publications, 2006, ISBN 8170089131, 9788170089131, Page 112-119

[3] Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating system concepts, Edition 9, illustrated, J. Wiley & Sons, 2009, ISBN 978-0-470-12872-5, Page 357-408.

[4] I. A. Dhotre, “Operating Systems”, Technical Publications, 2009, ISBN 8184311699, 9788184311693, Page 28

[5] A. P. Godse, D. A. Godse, “Computer Organization”, Technical Publications, 2010, ISBN 8184317727, 9788184317725, Page 43-44



By Kato Mivule

Operating Systems

According to Silberschatz et al, virtual memory is a method that allows the running of processes that are not entirely in memory; virtual memory involves the separation of logical memory, as viewed by users, from physical memory. Such a separation allows the operating system to provide a large virtual memory even when comparatively little physical memory is actually available. [1]

Microsoft's documentation describes virtual memory in terms of the virtual memory functions, which allow a process to manipulate or determine the status of pages in its virtual address space. [2]

One of the ways in which virtual memory is implemented is Demand Paging, where pages are loaded into memory only when they are needed during program execution. [1] [3]
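A minimal sketch of the demand-paging idea in Python, assuming a hypothetical backing_store dictionary standing in for the disk; a page is loaded only when it is first touched.

```python
# A minimal sketch of demand paging (illustrative data, no real I/O).
backing_store = {0: b"code", 1: b"data", 2: b"stack"}  # pages on disk
memory = {}                                            # resident pages

def access(page: int) -> bytes:
    if page not in memory:                   # page fault: page not resident
        print(f"page fault on page {page}")
        memory[page] = backing_store[page]   # demand-load from the store
    return memory[page]                      # later accesses hit memory

access(1); access(1)   # faults once, then hits
```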

Another virtual memory technique is Copy-on-Write, which allows parent and child processes to initially share the same pages. If a number of processes need access to the same page, those processes are given pointers to that one page, which is marked ‘copy-on-write’; the processes can read the shared page freely, and a private copy is made only when one of them writes to it. [1] [4]
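A minimal sketch of the copy-on-write mechanism, assuming an illustrative Page object with a reference count; the private copy is made only at the moment a shared page is written.

```python
# A minimal sketch of copy-on-write sharing (illustrative, not real paging).
class Page:
    def __init__(self, data):
        self.data = bytearray(data)  # page contents
        self.refs = 1                # how many processes map this page

def share(page: "Page") -> "Page":   # e.g., at fork(): child maps same page
    page.refs += 1
    return page

def write(page: "Page", index: int, value: int) -> "Page":
    if page.refs > 1:                # still shared: copy before writing
        page.refs -= 1
        page = Page(page.data)       # the "copy" in copy-on-write
    page.data[index] = value
    return page                      # caller re-points its page table here

parent = Page(b"hello")
child = share(parent)                # both map one physical page
child = write(child, 0, ord("j"))    # write triggers a private copy
print(parent.data, child.data)       # bytearray(b'hello') bytearray(b'jello')
```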

Yet another virtual memory implementation technique is Page Replacement: when a page must be brought in, the operating system first looks for a free frame in physical memory. If no free frame is found, it selects a little-used frame, called the victim frame, frees it, and allocates the incoming page to the freed frame. [1] [5]
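A minimal sketch of victim selection using FIFO replacement, assuming a hypothetical three-frame memory; FIFO stands in here for the victim-selection policy, and real systems typically approximate least-recently-used instead.

```python
# A minimal sketch of FIFO page replacement (3 frames, illustrative trace).
from collections import deque

FRAMES = 3
resident = deque()                   # resident pages, oldest at the left

def reference(page: int) -> None:
    if page in resident:
        return                       # hit: FIFO order is unchanged
    if len(resident) == FRAMES:      # no free frame: evict the oldest page
        print("evict page", resident.popleft())
    resident.append(page)            # bring the referenced page in

for p in [1, 2, 3, 4, 2, 1]:
    reference(p)                     # evicts page 1, then page 2
```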

According to Silberschatz et al, Main Memory is the physical and logical memory inside the computer, as distinct from storage such as hard drives and external drives. Main memory is referred to as RAM, or Random Access Memory, and is volatile – that is, it loses its contents once power is switched off. [1] [6]

Memory allocation in main memory involves a number of techniques, including address binding, in which logical addresses are mapped to physical addresses as processes are loaded into main memory; binding can take place at compile time, at load time, or at execution time. [1] [7]

Another main-memory technique is dynamic loading and dynamic linking, in which routines are not loaded until they are called (the former) and linking is done only when required (the latter). For dynamic loading, the routines are kept on disk in a relocatable load format and loaded only when needed. [1] [8]
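The lazy-loading idea can be sketched at the application level in Python, where a module stands in for a relocatable routine that is loaded only on first call; this is an analogy to dynamic loading, not the operating system mechanism itself.

```python
# A minimal sketch of load-on-first-call, using a standard-library module.
import importlib

_stats = None   # the routine's module, deliberately not loaded at start-up

def mean(values):
    global _stats
    if _stats is None:                               # first call: load now
        _stats = importlib.import_module("statistics")
    return _stats.mean(values)

print(mean([1, 2, 3]))   # the statistics module is loaded here, on demand
```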

Swapping is another technique: a process is moved out of main memory to the hard disk and then brought back into main memory for continued execution at a later time. [9]

Paging is a memory management technique that permits the physical address space of a process to be non-contiguous, avoiding fragmentation; physical space in main memory is allocated to a process in fixed-size blocks called frames. [1] [10]

Segmentation is a memory allocation technique in which main memory is divided into smaller segments as defined by the user. [11]


Sources

[1] Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating system concepts, Edition 9, illustrated, J. Wiley & Sons, 2009, ISBN 978-0-470-12872-5, Page 357-408.

[2] “Virtual Memory Functions (Windows).” [Online]. Available: http://msdn.microsoft.com/en-us/library/aa366916(VS.85).aspx. [Accessed: 28-Oct-2010].

[3] Sibsankar Haldar, Alex A. Aravind, “Operating Systems”, Pearson Education India, 2009, ISBN 8131730220, 9788131730225, Page 254

[4] Craig Partridge, “Gigabit networking”, Addison-Wesley, 1994, ISBN 0201563339, 9780201563337, Page 205

[5] D. M. Dhamdhere, “Operating Systems: A Concept-based Approach, 2E”, Tata McGraw-Hill, 2006, ISBN 0070611947, 9780070611948, Page 266

[6] “main memory (computer technology) — Britannica Online Encyclopedia.” [Online]. Available: http://www.britannica.com/EBchecked/topic/358612/main-memory. [Accessed: 28-Oct-2010].

[7] Nell Dale, John Lewis, “Computer Science Illuminated”, Jones & Bartlett Learning, 2009, ISBN 0763776467, 9780763776466, Page 340

[8] Er. Vivek Sharma, Er. Manish Varshney, Shantanu Sharma, “Design and Implementation of Operating System”, Laxmi Publications, Ltd., 2010, ISBN 9380386419, 9789380386416, Page 281-282

[9] Balagurusamy, “Fund Of Computers”, Tata McGraw-Hill, ISBN 0070141606, 9780070141605, Page 249

[10] Linda Null, Julia Lobur, “The essentials of computer organization and architecture”, Jones & Bartlett Learning, 2006, ISBN 0763737690, 9780763737696, Page 303

[11] Mohamed Rafiquzzaman, “Microprocessors and microcomputer-based system design”, CRC Press, 1995, ISBN 0849344751, 9780849344756, Page 19


By Kato Mivule

Operating Systems

Outline

In this article we take a look at some of the Memory Management Strategies discussed by Silberschatz et al and how they could be beneficial to memory management in Web Servers.

Introduction

A concrete definition of ‘Web Server’ is hard to come by, but PC Magazine states that a web server is a computer that runs a website using the HTTP protocol. [1] Web servers are specialized programs that allow simultaneous access to data across the internet; a web server therefore requires robust memory and storage resources.

This means that thousands of entities, including programs and people, may be requesting access to the web server at once, consuming a range of resources. At the same time, thousands of entities could be requesting to store items on the web server or to retrieve stored items. A web server administrator would do well to consider the following memory management strategies:

Swapping: As articulated by Silberschatz et al, a process must be in memory to be executed, but a process can be swapped temporarily out of memory to a backing store and later brought back into memory for continued execution. [2] In the case of web servers, some processes can be swapped out to make room for real-time or critical processes that do not take long to execute. In such circumstances, a large backing store would be required to hold the swapped-out processes temporarily.

Contiguous Memory Allocation: Memory is divided into two partitions, one for the resident operating system and one for user processes, as described by Silberschatz et al; the operating system is placed in either lower or higher memory. [2] This would benefit web servers, since the web server's operating system is given its own allocation in which to run efficiently, while user processes are allocated the remaining memory. Since web servers have to process thousands of transactions, a swapping/contiguous-allocation hybrid could be employed to further optimize user processes. A first-fit placement sketch follows.
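A minimal sketch of one classic placement policy for contiguous allocation, first fit, assuming an illustrative list of free holes; the final MemoryError hints at the external fragmentation problem this scheme suffers.

```python
# A minimal sketch of first-fit contiguous allocation (illustrative holes).
holes = [(100, 50), (200, 120), (400, 300)]   # free partitions (start, size)

def allocate(size: int) -> int:
    for i, (start, hole) in enumerate(holes):
        if hole >= size:                       # first hole that fits wins
            if hole == size:
                holes.pop(i)                   # hole consumed exactly
            else:
                holes[i] = (start + size, hole - size)  # shrink the hole
            return start                       # base address handed out
    raise MemoryError("no hole large enough (external fragmentation)")

print(allocate(100))   # 200: the first hole of at least 100 units
```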

Paging: According to Silberschatz et al, in this memory management strategy the physical address space of a process is allowed to be non-contiguous, avoiding fragmentation; paging also applies to the backing store, which is itself vulnerable to fragmentation that makes swapping and memory allocation slower. [2] With paging, physical memory is broken into fixed-size blocks called frames, and logical memory into blocks of the same size called pages; when a process is to be executed, its pages are loaded into any available frames from the backing store. [2] [3] Paging would therefore be a very good memory management strategy for web servers, as it sidesteps fragmentation and reduces dependence on the backing store.

Segmentation: Segmentation divides a program or process into smaller blocks called segments. [4] Silberschatz et al describe segmentation as a memory management scheme that supports the user's view of memory as a collection of variable-sized segments. [2] According to Dhotre, segmentation eliminates internal fragmentation, provides virtual memory, gives the user a logical view of memory, and allows dynamic segment growth, though one drawback is that segment size is restricted by main memory. [4] The advantages of segmentation would still benefit memory utilization in web servers, as virtual memory would be readily available. The overhead of segment size being restricted by main memory can be overcome, since the price of main memory has decreased while capacities have increased.

Conclusion

I would therefore suggest a hybrid approach to memory management for web servers, combining the swapping, contiguous memory allocation, paging, and segmentation strategies. The definite ‘game changer’ in this hybrid strategy is the economics involved: the price of memory continues to decrease while memory capacities increase significantly. More studies are needed to show the impact of large, cheap memory on this hybrid approach.

Sources and References

[1] “Web server Definition from PC Magazine Encyclopedia.” [Online]. Available: http://www.pcmag.com/encyclopedia_term/0,2542,t=Web+server&i=54342,00.asp. [Accessed: 12-Oct-2010].

[2] Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating system concepts, Edition 9, illustrated, J. Wiley & Sons, 2009, ISBN 978-0-470-12872-5, Page 315-350.

[3] Daniel Pierre Bovet, Marco Cesatí, Understanding the Linux Kernel, O’Reilly Series, Edition 3 illustrated, O’Reilly Media, Inc., 2005, ISBN 0596005652, 9780596005658, Page 46.

[4] I. A. Dhotre, “Operating Systems”, Technical Publications, 2009, ISBN 8184311699, 9788184311693, Page 28


By Kato Mivule

Operating Systems

Outline

The Two-Phase Locking (2PL) protocol has found widespread implementation in distributed database systems, and research continues on how better to detect and prevent deadlocks under 2PL in such systems. In this discussion we review the 2PL protocol, define the terms involved, examine deadlocks, and offer some suggestions on how to avoid deadlocks under the 2PL protocol.

Introduction

Even though the 2PL protocol finds its widest use in distributed database systems, it is also essential in operating systems. To better grasp 2PL, a look at serializability and locking protocols is an important first step, as these concepts are foundational to understanding the 2PL protocol.

Serializability

Serializability concerns the situation where multiple transactions are active simultaneously; since each transaction must be atomic, the concurrent execution of the transactions has to be equivalent to some serial execution of those same transactions. [1] [2] To ensure serializability, each data item is associated with a lock, and each lock follows a locking protocol that sets rules on how locks are acquired and released. [1] [3]

Locking protocol

Locking protocols by and large mediate access to shared data using locks that a transaction acquires and releases as it obtains and gives up the shared data. [4] Silberschatz et al describe the two lock modes as follows, with each transaction required to request a lock in a suitable mode on a data item Q, according to the type of operation it will perform (a small lock-table sketch follows the list): [1]

  • Shared: If a transaction Ti has acquired a shared-mode lock on a data item Q, then Ti can read Q but cannot write Q.
  • Exclusive: If a transaction Ti has acquired an exclusive-mode lock on a data item Q, then Ti can both read and write Q.
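A minimal sketch of these two lock modes as a compatibility check, assuming a simple in-memory lock table; blocking and queuing of waiting transactions are not modeled.

```python
# A minimal sketch of shared/exclusive lock compatibility (illustrative).
locks = {}   # data item -> (mode, set of holding transactions)

def acquire(tid: str, item: str, mode: str) -> bool:
    if item not in locks:
        locks[item] = (mode, {tid})
        return True
    held_mode, holders = locks[item]
    if mode == "S" and held_mode == "S":   # shared + shared: compatible
        holders.add(tid)
        return True
    return False                           # any pairing with "X" conflicts

print(acquire("T1", "Q", "S"))   # True: T1 may read Q
print(acquire("T2", "Q", "S"))   # True: readers share Q
print(acquire("T3", "Q", "X"))   # False: a writer must wait
```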

As described by Silberschatz et al, the problem with the locking protocol alone is that serializability is not guaranteed if a transaction unlocks a data item immediately after use. [1]

Two-Phase Locking Protocol

One protocol that is said to guarantee serializability is the Two-Phase Locking (2PL) protocol. 2PL oversees locks by determining when transactions may acquire and release them. [5] The protocol requires each transaction to issue its lock and unlock requests in two phases: [1]

  • Growing Phase: A transaction may obtain locks but may not release any locks.
  • Shrinking Phase: A transaction may release locks but not obtain any new lock.

A transaction first enters the growing phase and requests the locks it needs; it then enters the shrinking phase, where it releases its locks and can make no further lock requests. [1] Under 2PL, a transaction must acquire all the locks it needs before it begins unlocking. [6]
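A minimal sketch of the two-phase rule itself, assuming an illustrative Transaction class that records when the shrinking phase begins; any lock request after the first unlock is rejected.

```python
# A minimal sketch of enforcing the 2PL growing/shrinking discipline.
class Transaction:
    def __init__(self, tid: str):
        self.tid = tid
        self.locks = set()
        self.shrinking = False    # flips to True at the first unlock

    def lock(self, item: str) -> None:
        if self.shrinking:        # 2PL: no new locks after any release
            raise RuntimeError(f"{self.tid} violates 2PL: lock after unlock")
        self.locks.add(item)

    def unlock(self, item: str) -> None:
        self.shrinking = True     # the growing phase is over
        self.locks.discard(item)

t = Transaction("T1")
t.lock("A"); t.lock("B")          # growing phase
t.unlock("A")                     # shrinking phase begins
# t.lock("C") would now raise: it would break the two-phase rule
```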

While the 2PL protocol guarantees serializability, it does not ensure freedom from deadlock. Silberschatz et al also point out that, for a given set of transactions, there may be conflict-serializable schedules that cannot be obtained using the 2PL protocol. [1]

Challenges also arise when 2PL is faced with transactions that require communication across data sets. This becomes a problem in distributed applications such as distributed database systems. [7]

According to the Encyclopedia of Library and Information Science, to avoid deadlocks under 2PL, distributed applications such as database management systems employ deadlock detection: they construct transaction wait-for graphs and check them for cycles. Any cycle detected in the wait-for graph is resolved by aborting one of its transactions to break the cycle. [7]

In distributed database systems, for example, each site is required to keep and maintain a wait-for graph whose nodes are the transactions requesting or currently holding a lock. [8] The system should then be able to detect any cycles and resolve the corresponding deadlocks.
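A minimal sketch of deadlock detection over a wait-for graph, assuming an illustrative adjacency map in which an edge Ti -> Tj means Ti is waiting for a lock held by Tj; a depth-first search finds the cycle.

```python
# A minimal sketch of cycle detection in a wait-for graph (illustrative).
wait_for = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}   # a three-way wait

def has_cycle(graph: dict) -> bool:
    visited, on_path = set(), set()

    def dfs(node) -> bool:
        if node in on_path:
            return True                   # back edge: a deadlock cycle
        if node in visited:
            return False
        visited.add(node)
        on_path.add(node)
        if any(dfs(n) for n in graph.get(node, [])):
            return True
        on_path.discard(node)
        return False

    return any(dfs(node) for node in graph)

print(has_cycle(wait_for))   # True: T1, T2, and T3 are deadlocked
```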

Conclusion

The Two-Phase Locking (2PL) protocol is widely implemented in distributed database systems, and research continues on how better to detect and prevent deadlocks, and on how to optimize the 2PL protocol, in such systems.

References

[1] Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Operating system concepts, Edition 9, illustrated, J. Wiley & Sons, 2009, ISBN 978-0-470-12872-5, Page 264-265.

[2] Joseph M. Hellerstein, Michael Stonebraker, James Hamilton, Architecture of a Database System, Volume 1 of Foundations and trends in databases, Now Publishers Inc, 2007, ISBN 1601980787, 9781601980786, Page 81

[3] Heinrich Christian Mayr, Database and expert systems applications: 12th international conference, DEXA 2001, Munich, Germany, September 3-5, 2001 : proceedings, Volume 2113 of Lecture notes in computer science, Springer, 2001, ISBN 3540425276, 9783540425274, Page 330

[4] Gerhard Weikum, Gottfried Vossen, Transactional information systems: theory, algorithms, and the practice of concurrency control and recovery, The Morgan Kaufmann series in data management systems, Series in Data Management Systems, Morgan Kaufmann, 2002, ISBN 1558605088, 9781558605084, Page 130

[5] “TWO PHASE LOCKING.” [Online]. Available: http://research.microsoft.com/en-us/people/philbe/chapter3.pdf. [Accessed: 05-Oct-2010].

[6] Subhash Bhalla, Databases in networked information systems: 5th international workshop, DNIS 2007, Aizu-Wakamatsu, Japan, October 17-19, 2007 : proceedings, Volume 4777 of Lecture notes in computer science, LNCS sublibrary: Information systems and applications, incl. internet/web, and HCI, Springer, 2007, ISBN 354075511X, 9783540755111, Page 206

[7] Allen Kent, Encyclopedia of Library and Information Science, Volume 65, CRC Press, 1999, ISBN 0824720652, 9780824720650, Pages 107-109, 104

[8] Majumdar, Int To Dbms, Tata McGraw-Hill, 2001, ISBN 0074622390, 9780074622391, Page 450


By Kato Mivule

Operating Systems

Parallel programming involves concurrent computation: the simultaneous execution of processes or threads. [1] Sequential programming involves the consecutive, ordered execution of processes one after another. [2]

In other words, with sequential programming processes run one after another in succession, while in parallel computing multiple processes execute at the same time. Sequential programming models computation on problems with a chronological sequence of events. [3]

In such cases the program executes a process that waits for user input, then another process is executed that returns a result according to that input, creating a series of cascading events. In contrast, in parallel programming processes may execute concurrently, yet sub-processes or threads might communicate and exchange signals during execution, and programmers therefore have to put measures in place to allow for such interactions. [4]
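A minimal sketch of the contrast, assuming a stand-in work() function; using Python's standard concurrent.futures, the same tasks are run once in order and once concurrently.

```python
# A minimal sketch: the same tasks, sequentially and then in parallel.
from concurrent.futures import ThreadPoolExecutor

def work(n: int) -> int:
    return n * n                 # stand-in for one unit of computation

# Sequential: tasks run one after another, in order.
sequential = [work(n) for n in range(8)]

# Parallel: tasks are submitted together and may run concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, range(8)))

assert sequential == parallel    # same results, different execution model
```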

It is in this type of environment that we have to think about CPU utilization; parallel computation is, after all, meant for multiprocessor environments. To avoid impasses, Michael Suess, in his article “Mutual Exclusion with Locks – an Introduction”, suggests a number of ways to solve the problem of stalemates by implementing mutual exclusion, which prevents multiple concurrently running threads from working on the same data at the same time. [5]

In the simplest terms, mutual exclusion is achieved by placing a lock on the critical region, admitting only one thread at a time until that thread is done and unlocks the critical region, giving another thread access. Stalemates are thereby avoided, and Michael Suess mentions a number of lock variants for dealing with such impasses (a sketch of three of them follows the list): [5]

  • Mutex: One thread at a time is allowed into the critical section; other requesting threads are put to sleep until the thread in the critical section exits, making room for another thread.
  • Spinlocks: Threads are not put to sleep but keep spinning; the spin continues until the thread in the critical section exits, giving room to another thread.
  • Recursive Locks: An internal counter keeps track of a thread's locks and unlocks so as to prevent deadlocks when the owner re-acquires the lock.
  • Timed Locks: A timer is used when requesting the critical section; should it still be held by another thread, the requesting thread will do something else rather than be put to sleep or keep spinning – do something else while you wait.
  • Hierarchical Locks: Threads with the same memory locality acquire locks consecutively.
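A minimal sketch of three of these lock flavors using Python's standard threading module, assuming a shared counter as the critical section; mutex, recursive, and timed locks map onto threading.Lock, threading.RLock, and acquire with a timeout.

```python
# A minimal sketch of mutex, recursive, and timed locks in Python.
import threading

counter = 0
mutex = threading.Lock()

def increment() -> None:
    global counter
    with mutex:                  # mutex: one thread at a time in here
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 100: no lost updates

rlock = threading.RLock()        # recursive lock: the owner may re-acquire
with rlock:
    with rlock:                  # an internal counter tracks the nesting
        pass

if mutex.acquire(timeout=0.1):   # timed lock: give up instead of waiting
    mutex.release()
```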

These methods of controlling thread communication and execution are a critical difference between sequential and parallel programming. The benefits of parallel computation, however, outweigh the overheads, since multiple threads can execute concurrently – the main difference between sequential and parallel computation.

References

[1] “Parallel computing – Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/Parallel_computing. [Accessed: 29-Sep-2010].

[2] “sequence (programming) — Britannica Online Encyclopedia.” [Online]. Available: http://www.britannica.com/EBchecked/topic/1086517/sequence. [Accessed: 29-Sep-2010].

[3] Brian Harvey, Matthew Wright, “Simply Scheme: Introducing Computer Science”, Edition 2, illustrated, MIT Press, 1999, ISBN 0262082810, 9780262082815

[4] M. O. Tokhi, Mohammad Alamgir Hossain, Mohammad Hasan Shaheed, Parallel computing for real-time signal processing and control, Advanced textbooks in control and signal processing, Edition illustrated, Springer, 2003, ISBN 1852335998, 9781852335991

[5] “Mutual Exclusion with Locks – an Introduction » Thinking Parallel.” [Online]. Available: http://www.thinkingparallel.com/2006/09/09/mutual-exclusion-with-locks-an-introduction/. [Accessed: 29-Sep-2010].


By Kato Mivule

Operating Systems

All major current Operating Systems offer at least Process Management, Interrupts, Memory Management, File System Management, Device Drivers, Networking Capabilities, Security, and Input/Output Management. [1] These features will certainly see more innovation, and hardware and Operating Systems will get smaller and more mobile while accomplishing greater tasks as Mobile Computing grows. In this article we take a look at what added features and advancements we might see in Process Management, Interrupts, and File System Management.

With Process Management, Operating Systems manage how Processes or Tasks, which are small programs that run in the background accomplishing various tasks in the computer, utilize the CPU. This involves setting up schedules for the various Processes at various times. [2] [3] As more powerful and faster Processors reach the market and their prices decrease, we shall see CPUs that can multitask and carry numerous concurrent Processes at the same time. Likewise, Processors for Mobile Computing will become so much more powerful and faster that an iPhone will do more than a laptop does today.

Interrupts, according to Michael J. Jipping, are special events that get the attention of the hardware or software to perform a certain task, making Operating Systems Interrupt driven; that is, an Operating System waits for an Interrupt in order to carry out a particular task. [4] What we might see are enhanced Intelligent Operating System Interrupts that do not wait for normal interrupts to happen but instead dynamically carry out tasks given the system's history.

Intelligent Operating System Interrupts might as well self-correct errors, or forecast errors that might happen and trigger an intelligent cascade of interrupts to self-correct. Moreover, Intelligent Operating System Interrupts could be enhanced to implement not only Hardware Security but Software Security and Network Security as well. For instance, if such an interrupt system senses an anomaly in the network traffic, interrupts could be triggered calling for security action in real time.

Memory Management is another feature of current operating systems. However, with the advent of distributed systems, many memory resources sit idle. Future operating system memory managers could in real time employ intelligence to access memory on idle systems across the mobile computing or cloud computing spectrum.

Just like torrent technology, which relies on peer-to-peer networking to speed up download time, future operating systems could support memory sharing in which millions of mobile computing devices share memory to run a particular application on any system. Devices that do not use their memory, or have excess memory, could even cash in and make money by selling off memory usage at times when those systems are idle.

File System Management in future Operating Systems will increasingly offer server capabilities. File System Management organizes files into proper structures for quicker access; files are organized into Directories, and Directories can be grouped into larger Directories. [5] Operating systems like Linux already offer file server capabilities. As more powerful and sophisticated Mobile Computing devices are created, gadgets like laptops and PCs will become mini-servers. What we shall continue to see is perhaps robust Intelligent Real Time Distributed File System Management with the capability to manage files across Mobile Devices, tremendously increasing Data Mining capabilities across the Mobile Computing spectrum.

However, this will worsen the security problem as more mobile devices become robust data mining agents. Perhaps Intelligent Operating System Interrupts could be enhanced to act as Intelligent Security Agents, personalized to enforce Privacy and Security, and even allowing users to cash in on data they may be willing to sell.

These are just ideas for now and perhaps some are already being implemented.

Notes

[1] “Operating system – Wikipedia, the free encyclopedia.” [Online]. Available: http://en.wikipedia.org/wiki/Operating_system. [Accessed: 06-Sep-2010].

[2] “Operating System Process Management and the Effect on Maintenance: A Comparison of Linux, FreeBSD, and Darwin” Liguo Yu [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.144.4911 [Accessed: 07-Sep-2010].

[3] Nell Dale, John Lewis, Computer Science Illuminated, Edition 4, Jones & Bartlett Learning, 2009, Page 347, ISBN 0763776467, 9780763776466

[4] Michael J. Jipping, Smartphone Operating System Concepts with Symbian OS: A Tutorial Guide, Volume 15 of Symbian Press, John Wiley and Sons, 2007, ISBN 0470034491, 9780470034491

[5] “What is file management system? – A Word Definition From the Webopedia Computer Dictionary.” [Online]. Available: http://www.webopedia.com/TERM/F/file_management_system.html. [Accessed: 07-Sep-2010].

