Top 10 Lists: Week 08

  1. Scheduling
    CPU scheduling is the process by which the CPU decides which process to run while the other processes are on hold; when the CPU becomes idle, it picks another process that was waiting. There are two kinds of CPU scheduling: preemptive and non-preemptive. Scheduling tries to reach maximum CPU utilization and throughput while minimizing turnaround time and response time (a small worked example of these metrics follows below).
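
    As a minimal sketch of those metrics (assuming every process arrives at time 0 and is served first-come, first-served; the process names and burst times are made-up example values), the Python snippet below computes average waiting and turnaround times:

    ```python
    # FCFS scheduling metrics for processes that all arrive at time 0.
    # Burst times are hypothetical values chosen only for illustration.
    burst_times = {"P1": 5, "P2": 3, "P3": 8}   # CPU burst of each process, in ms

    clock = 0
    waiting, turnaround = {}, {}
    for name, burst in burst_times.items():
        waiting[name] = clock             # time spent on hold before getting the CPU
        turnaround[name] = clock + burst  # completion time minus arrival time (0)
        clock += burst                    # the CPU is busy for this whole burst

    print("average waiting time   :", sum(waiting.values()) / len(waiting))
    print("average turnaround time:", sum(turnaround.values()) / len(turnaround))
    ```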

  2. Preemptive Scheduling
    This kind of scheduling runs the process with the highest priority, even if another task is already running: the lower-priority task is put on hold and resumes once the higher-priority task finishes its execution. Scheduling is preemptive if a process can be moved from the running state or the waiting state to the ready state before it has finished.
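
    Below is a toy sketch of preemptive priority scheduling (one time unit per tick; the arrival times, priorities, and burst lengths are hypothetical example values, and a lower number means a higher priority):

    ```python
    import heapq

    # Toy preemptive priority scheduler: the highest-priority ready process runs,
    # and a newly arrived higher-priority process preempts it on the next tick.
    procs = [  # (arrival time, priority, name, burst length) - example values
        (0, 2, "low",  4),
        (1, 1, "high", 2),
    ]

    procs.sort()                                      # order by arrival time
    remaining = {name: burst for _, _, name, burst in procs}
    ready, time, i = [], 0, 0
    while remaining:
        while i < len(procs) and procs[i][0] <= time: # admit new arrivals
            arrival, prio, name, _ = procs[i]
            heapq.heappush(ready, (prio, name))
            i += 1
        if not ready:
            time += 1
            continue
        prio, name = ready[0]                         # run the best ready process
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:                      # finished: leave the queue
            heapq.heappop(ready)
            del remaining[name]
            print(f"{name} finished at t={time}")
    ```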

  3. Non-Preemptive Scheduling
    In this kind of scheduling, once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by switching to the waiting state or by terminating. Scheduling is non-preemptive if it only takes place when a process switches from the running state to the waiting state or when a process finishes its execution and terminates.
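
    One non-preemptive policy is shortest-job-first; the sketch below (burst times are made-up values, and every process is assumed to arrive at time 0) shows each process holding the CPU for its full burst once it starts:

    ```python
    # Non-preemptive shortest-job-first: once a process gets the CPU, it keeps it
    # until it finishes. Burst times are hypothetical example values.
    bursts = {"P1": 6, "P2": 2, "P3": 4}

    time = 0
    for name in sorted(bursts, key=bursts.get):  # always start the shortest waiting job
        print(f"t={time}: {name} starts and runs to completion (burst {bursts[name]})")
        time += bursts[name]                     # no preemption: CPU held for the whole burst
    print(f"all processes finished at t={time}")
    ```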

  4. Burst time
    Also known as running time or execution time, burst time is the amount of CPU time a process needs before it completes or gives up the CPU (for example, to wait for I/O).

  5. Multiprocessor Scheduling
    In multiprocessor scheduling there are multiple CPUs that share the load, so several processes can run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. When the processors are identical, any process in the queue can be run on any available processor at any time.
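
    A simple way to share the load among identical CPUs is to place each incoming process on the least-loaded one, as in this sketch (the number of CPUs and the burst lengths are arbitrary example values):

    ```python
    import heapq

    # Toy load sharing across identical CPUs: every incoming process goes to the
    # CPU with the least accumulated work. Bursts are made-up example values.
    NUM_CPUS = 2
    bursts = [5, 3, 8, 2, 4]

    cpus = [(0, cpu_id) for cpu_id in range(NUM_CPUS)]  # (accumulated load, CPU id)
    heapq.heapify(cpus)
    for burst in bursts:
        load, cpu_id = heapq.heappop(cpus)              # identical CPUs, so any will do:
        heapq.heappush(cpus, (load + burst, cpu_id))    # pick the least-loaded one
        print(f"burst {burst} -> CPU {cpu_id} (load now {load + burst})")
    ```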

  6. Thread Scheduling
    The execution of a running thread is temporarily suspended whenever the microkernel is entered as the result of a kernel call, exception, or hardware interrupt. A scheduling decision is made whenever the execution state of any thread changes—it doesn’t matter which processes the threads might reside within. Threads are scheduled globally across all processes.
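
    The key idea, that the dispatcher looks at one global set of ready threads rather than at processes, can be sketched like this (the process names, thread names, and priorities are hypothetical; a lower number means a higher priority):

    ```python
    import heapq

    # Sketch of global thread scheduling: a single ready queue holds threads from
    # every process, and the highest-priority ready thread is dispatched next,
    # no matter which process it belongs to.
    ready = []  # entries are (priority, process name, thread name)
    for entry in [(10, "procA", "t1"), (5, "procB", "t1"),
                  (10, "procB", "t2"), (1, "procC", "t1")]:
        heapq.heappush(ready, entry)

    while ready:
        prio, proc, thread = heapq.heappop(ready)
        print(f"dispatch {proc}/{thread} (priority {prio})")
    ```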

  7. Non-uniform Memory Access
    NUMA (non-uniform memory access) is the phenomenon that memory at various points in the address space of a processor has different performance characteristics. At current processor speeds, the signal path length from the processor to memory plays a significant role. Increased signal path length not only increases latency to memory but also quickly becomes a throughput bottleneck if the signal path is shared by multiple processors.
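
    A back-of-the-envelope way to see the effect: if local and remote accesses cost different amounts (the latency numbers below are assumptions, not measurements), the average memory latency grows with the fraction of accesses that go to a remote node:

    ```python
    # Rough NUMA model: average memory latency as a function of how many accesses
    # hit a remote node. The latency numbers are assumptions for illustration only.
    LOCAL_NS, REMOTE_NS = 80, 140

    for remote_fraction in (0.0, 0.25, 0.5, 1.0):
        avg = (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS
        print(f"{remote_fraction:4.0%} remote accesses -> average latency {avg:.0f} ns")
    ```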

  8. Asymmetric Multiprocessing
    An asymmetric multiprocessing (AMP) system is a multiprocessor computer system in which not all of the interconnected central processing units (CPUs) are treated equally. In asymmetric multiprocessing, only a master processor runs the tasks of the operating system. For example, AMP can be used to assign specific tasks to CPUs based on the priority and importance of task completion.
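
    The sketch below shows the idea in miniature (the task names, priorities, and CPU labels are made up): only the master runs the assignment logic, and the other CPUs simply receive work:

    ```python
    # Toy AMP dispatcher: only the master CPU runs the operating-system logic that
    # decides who does what; the worker CPUs just receive the tasks.
    tasks = [("log rotation", 3), ("sensor read", 1), ("network I/O", 2)]  # (name, priority)
    workers = ["CPU1", "CPU2", "CPU3"]

    # The master (CPU0) hands out tasks in priority order, one per worker CPU.
    for worker, (name, prio) in zip(workers, sorted(tasks, key=lambda t: t[1])):
        print(f"master CPU0 assigns '{name}' (priority {prio}) to {worker}")
    ```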

  9. Symmetric Multiprocessing
    Symmetric multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single shared main memory and have full access to all input and output devices. In other words, symmetric multiprocessing is a type of multiprocessing where each processor is self-scheduling. For example, SMP can apply multiple processors to a single problem, which is known as parallel programming.
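
    As a small sketch of applying several identical processors to one problem (the worker count and the chunk boundaries are arbitrary choices), Python's multiprocessing pool can split a sum of squares across CPUs:

    ```python
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum of squares over [lo, hi); each worker handles one chunk."""
        lo, hi = bounds
        return sum(n * n for n in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(0, 250_000), (250_000, 500_000),
                  (500_000, 750_000), (750_000, 1_000_000)]
        with Pool(processes=4) as pool:          # each worker can run on its own CPU
            total = sum(pool.map(partial_sum, chunks))
        print("sum of squares below 1,000,000:", total)
    ```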

  10. Time Complexity Analysis (Big O)
    The time complexity of an algorithm signifies the total time required by the program to run to completion. It is most commonly expressed using big O notation, an asymptotic notation that describes how the running time grows with the input size. Besides big O, there are other notations such as big omega, big theta, little o, and little omega.
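
    For example, the two lookups below return the same answer but have different time complexity: a linear scan is O(n), while a binary search of a sorted list is O(log n) (the data set here is just an illustrative list of even numbers):

    ```python
    from bisect import bisect_left

    def linear_search(items, target):       # O(n): may inspect every element
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    def binary_search(items, target):       # O(log n): halves the search space each step
        i = bisect_left(items, target)
        return i if i < len(items) and items[i] == target else -1

    data = list(range(0, 1_000_000, 2))     # sorted even numbers
    print(linear_search(data, 999_998))     # ~500,000 comparisons
    print(binary_search(data, 999_998))     # ~19 comparisons
    ```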