Yahoo奇摩 Web Search

Search Results

  1. en.wikipedia.org › wiki › Linux_kernel · Linux kernel - Wikipedia

    The Linux kernel is a free and open-source,[12]: 4 monolithic, modular, multitasking, Unix-like operating system kernel. It was originally written in 1991 by Linus Torvalds for his i386-based PC, and it was soon adopted as the kernel for the GNU operating system, which was written to be a free (libre) replacement for Unix.

    • Initial release: 0.02 (5 October 1991)
    • License: GPL-2.0-only with Linux-syscall-note
    • Latest release: 6.5.6 (6 October 2023)
  2. This article documents the version history of the Linux kernel. The Linux kernel is a free and open-source, monolithic, Unix-like operating system kernel. It was conceived and created in 1991 by Linus Torvalds. Linux kernels have different support levels depending...

  3. People also ask

  4. en.wikipedia.org › wiki › Linux · Linux - Wikipedia

    The Linux kernel is a widely ported operating system kernel, available for devices ranging from mobile phones to supercomputers; it runs on a highly diverse range of computer architectures, including ARM-based Android smartphones and the IBM Z mainframes.

    • Initial release: September 17, 1991
    • OS family: Unix-like
    • Random-Access Memory
    • Input/Output Devices
    • Resource Management
    • Memory Management
    • Device Management
    • System Calls
    • Kernel Design Decisions
    • Kernel-Wide Design Approaches
    • History of Kernel Development
    • See Also

    Random-access memory (RAM) is used to store both program instructions and data.[a] Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use...
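
    The allocation decision described above is visible from user space: a process asks the kernel for memory and the kernel either grants or refuses the request. The following is only a minimal sketch, assuming a Linux/POSIX environment and a C compiler; it is illustrative and not taken from the article.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1 << 20;                       /* request 1 MiB */
    void *p = mmap(NULL, len,                   /* NULL: let the kernel pick the address */
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {                      /* the kernel refused the request */
        perror("mmap");
        return EXIT_FAILURE;
    }
    printf("kernel placed the mapping at %p\n", p);
    munmap(p, len);                             /* return the memory to the kernel */
    return 0;
}
```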

    I/O devices include, but are not limited to, peripherals such as keyboards, mice, disk drives, printers, USB devices, network adapters, and display devices. The kernel provides convenient methods for applications to use these devices, which are typically abstracted by the kernel so that applications do not need to know their implementation details.
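
    One way this abstraction shows up in practice is that devices appear as files, so the same open/read/close calls work regardless of the underlying hardware. A minimal sketch follows, assuming a Linux system where the character device /dev/urandom exists.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[8];
    int fd = open("/dev/urandom", O_RDONLY);    /* a character device, not a regular file */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, sizeof buf);      /* same read(2) used for ordinary files */
    close(fd);
    if (n < 0) { perror("read"); return 1; }

    for (ssize_t i = 0; i < n; i++)
        printf("%02x", buf[i]);
    putchar('\n');
    return 0;
}
```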

    Key aspects necessary in resource management are defining the execution domain (address space) and the protection mechanism used to mediate access to the resources within a domain. Kernels also provide methods for synchronization and inter-process communication (IPC). These implementations may be located within the kernel itself or the kernel can al...
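
    As a concrete illustration of kernel-provided IPC, the sketch below passes a message from a parent process to its child through a pipe. It assumes a POSIX environment; the message text is arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                             /* child: reads from the pipe */
        char buf[64] = {0};
        close(fds[1]);
        read(fds[0], buf, sizeof buf - 1);
        printf("child received: %s\n", buf);
        return 0;
    }

    const char *msg = "hello via the kernel";   /* parent: writes into the pipe */
    close(fds[0]);
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```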

    The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address....
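
    The effect of virtual addressing can be observed from user space: after fork(), parent and child see the same virtual address for a variable, yet writes in one do not appear in the other, because the kernel backs that address with different physical pages. A small sketch, assuming a POSIX system:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        value = 99;                             /* modifies the child's private copy */
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }
    wait(NULL);                                 /* let the child run first */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);   /* still 42 */
    return 0;
}
```

    Both processes print the same virtual address while reporting different values, which is exactly the indirection the paragraph describes.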

    To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. A device driver is a computer program encapsulating, monitoring and controlling a hardware device (via its Hardware/Software Interface (HSI)) on behalf of the OS. It provides the operating syste...
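
    Device drivers for Linux are typically packaged as loadable kernel modules. The skeleton below is only a sketch of that packaging (it does no real device work); it assumes a Linux kernel build environment, and the names demo_init/demo_exit and the log messages are made up for illustration.

```c
#include <linux/init.h>
#include <linux/module.h>

static int __init demo_init(void)
{
    pr_info("demo driver: loaded\n");           /* runs when the module is inserted */
    return 0;
}

static void __exit demo_exit(void)
{
    pr_info("demo driver: unloaded\n");         /* runs when the module is removed */
}

module_init(demo_init);
module_exit(demo_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative skeleton, not a real driver");
```

    A real driver would additionally register file operations or a bus-specific probe routine; this skeleton only shows the load/unload hooks, built with a standard out-of-tree module Makefile and loaded with insmod.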

    In computing, a system call is how a process requests a service from an operating system's kernel that it does not normally have permission to run. System calls provide the interface between a process and the operating system. Most operations interacting with the system require permissions not available to a user-level process, e.g., I/O performed ...
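
    As an illustration, the sketch below performs the same privileged operation, writing to standard output, once through the C library wrapper write() and once by issuing the system call directly with syscall(2). It assumes Linux with glibc.

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *a = "via the write() wrapper\n";
    const char *b = "via syscall(SYS_write, ...)\n";

    write(STDOUT_FILENO, a, strlen(a));              /* C library wrapper */
    syscall(SYS_write, STDOUT_FILENO, b, strlen(b)); /* raw system call */
    return 0;
}
```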

    Protection

    An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection. The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or d...
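
    One concrete, dynamically enforced protection mechanism on Linux is per-page memory protection: a process can only access memory with the permissions the kernel has recorded for it. A minimal sketch, assuming a POSIX/Linux environment:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "writable for now");
    if (mprotect(p, pagesz, PROT_READ) != 0) {  /* ask the kernel to drop write access */
        perror("mprotect");
        return 1;
    }
    printf("page is now read-only: %s\n", p);
    /* A write such as p[0] = 'X' here would be stopped by the kernel with
     * SIGSEGV rather than silently corrupting memory. */
    munmap(p, pagesz);
    return 0;
}
```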

    Process cooperation

    Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation. However, this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible. A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared...
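
    The lock/unlock primitive described above can be tried directly with a POSIX binary semaphore. The sketch below uses one semaphore (initialized to 1) to serialize two threads incrementing a shared counter; it assumes a POSIX threads environment and compilation with -pthread.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t lock;                              /* binary semaphore, initialized to 1 */
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&lock);                        /* "lock": blocks while the value is 0 */
        counter++;                              /* critical section */
        sem_post(&lock);                        /* "unlock" */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&lock, 0, 1);
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&lock);
    return 0;
}
```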

    I/O device management

    The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes. Similar to physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction...

    The above listed tasks and features can be provided in many ways that differ from each other in design and implementation. The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. Here a mechanism is the support that allows the implementation of many different policies,...
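
    A loose, user-visible illustration of the mechanism/policy split is CPU scheduling on Linux: the kernel implements the scheduling machinery, while a process (if privileged) merely selects which policy applies to it. The sketch below assumes Linux and will simply report EPERM when run without the required privileges.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };

    /* The scheduling mechanism lives in the kernel; here the process only
     * chooses a policy (SCHED_FIFO). Without CAP_SYS_NICE this fails with EPERM. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");
    else
        printf("now running under SCHED_FIFO\n");
    return 0;
}
```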

    Early operating system kernels

    Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s; they were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program l...

    Time-sharing operating systems

    In the decade preceding Unix, computers had grown enormously in power – to the point where computer operators were looking for new ways to get people to use their spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine. The development of time-sharing systems led to a number of problems. One was that user...

    Amiga

    The Commodore Amiga was released in 1985, and was among the first – and certainly most successful – home computers to feature an advanced kernel architecture. The AmigaOS kernel's executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, a...

  5. Linux API, Linux ABI, and in-kernel APIs and ABIs · The Linux kernel provides multiple interfaces to user-space and kernel-mode code that are used for varying purposes and that have varying properties by design. There are two types of application programming interfaces...
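
    A small sketch of user-space code going through the Linux API: uname(2) is a system call reached via its C library wrapper, and the structure it fills is part of the stable kernel-to-user-space interface. It assumes Linux with glibc.

```c
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) != 0) {                       /* system call via its C library wrapper */
        perror("uname");
        return 1;
    }
    printf("kernel: %s %s (%s)\n", u.sysname, u.release, u.machine);
    return 0;
}
```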

  6. A typical Linux distribution comprises a Linux kernel, an init system (such as systemd, OpenRC, or runit), GNU tools and libraries, documentation, and many other types of software (such as IP network configuration utilities and the getty TTY setup program, among others).

  7. Linus Benedict Torvalds (/ˈliːnəs ˈtɔːrvɔːldz/ LEE-nəs TOR-vawldz,[2] Finland Swedish: [ˈliːnʉs ˈtuːrvɑlds]; born 28 December 1969) is a Finnish-American software engineer who is the creator and lead developer of the Linux kernel. He also created the distributed version control system Git.