There's one major distinction between an intranet and the Internet: the Internet is an open, public space, while an intranet is designed to be a private space. An intranet may be accessible from the Internet, but as a rule it's protected by a password and accessible only to employees or other authorized users. From within a company, an intranet server may respond much more quickly than a typical Web site, because the public Internet is at the mercy of traffic spikes, server breakdowns and other problems that may slow the network. Within a company, users have much more bandwidth, and the network hardware may be more reliable. This makes it easier to serve high-bandwidth content, such as audio and video, over an intranet (though some employers, of course, block streaming video and most entertainment sites outright). An extranet is a portion of an organization's intranet that is made accessible to authorized outside users without giving them access to the entire intranet.
Tuesday, 29 June 2010
Sunday, 20 June 2010
Network Topology
In computer networking, topology refers to the layout of connected devices.
Network topology is defined as the interconnection of the various elements (links, nodes, etc.) of a computer network.[1][2] Network topologies can be physical or logical. Physical topology means the physical design of a network, including the devices, their locations and the cable installation. Logical topology refers to how data actually transfers in a network, as opposed to its physical design.
Topology can be considered as a virtual shape or structure of a network. This shape does not necessarily correspond to the actual physical layout of the devices on the network. The computers on a home network may be arranged in a circle, but that does not necessarily mean the network has a ring topology.
Any particular network topology is determined only by the graphical mapping of the configuration of physical and/or logical connections between nodes. The study of network topology uses graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ in two networks and yet their topologies may be identical.
A Local Area Network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN has one or more links to one or more nodes in the network and the mapping of these links and nodes in a graph results in a geometrical shape that may be used to describe the physical topology of the network. Likewise, the mapping of the data flow between the nodes in the network determines the logical topology of the network. The physical and logical topologies may or may not be identical in any particular network.
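To make the graph view concrete, here is a minimal Python sketch (the node names and links are invented for illustration) that maps a physical layout as an adjacency list and tests one property of a ring topology:

```python
# A physical topology mapped as a graph (adjacency list).
# Node names "A".."D" are made up for illustration.
ring = {
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C", "A"],
}

def looks_like_ring(graph):
    """A connected graph is a ring if every node has exactly two links."""
    return all(len(neighbours) == 2 for neighbours in graph.values())

print(looks_like_ring(ring))  # True
```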
Computer Network
A computer network, often simply referred to as a network, is a collection of computers and devices connected by communications channels that facilitate communication among users and allow them to share resources. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of types and categories and also presents the basic components of a network.
Rabu, 5 Mei 2010
Operating System
An operating system is the software on a computer that manages the way different programs use its hardware, and regulates the ways that a user controls the computer.[1][2] Operating systems are found on almost any device that contains a computer with multiple programs—from cellular phones and video game consoles to supercomputers and web servers. Some popular modern operating systems for personal computers include Microsoft Windows, Mac OS X, and Linux[3] (see also: list of operating systems, comparison of operating systems).
Because early computers were often built for only a single task, operating systems did not exist in their proper form until the 1960s.[4] As computers evolved into devices that could run different programs in succession, programmers began putting libraries of common programs (in the form of computer code) onto the computer in order to avoid duplication and speed up the process. Eventually, computers began being built to automatically switch from one task to the next. The creation of runtime libraries to manage processing and printing speed came next, which evolved into programs that could interpret different types of programming languages into machine code. When personal computers from companies such as Apple Inc., Atari, IBM and Amiga became popular in the 1980s, vendors began adding features such as software scheduling and hardware maintenance.
An operating system can be divided into many different parts. One of the most important is the kernel, which controls low-level processes that the average user usually cannot see: how memory is read and written, the order in which processes are executed, how information is received and sent by devices like the monitor, keyboard and mouse, and how to interpret information received from networks. The user interface is the part of the operating system that interacts with the computer user directly, allowing them to control and use programs. The user interface may be graphical, with icons and a desktop, or textual, with a command line. A related feature is the application programming interface (API), a set of services and code libraries that lets applications interact with one another as well as with the operating system itself. Depending on the operating system, many of these components may not be considered an actual part of it. For example, Windows considers its user interface to be part of the operating system, while many versions of Linux do not.
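As a small illustration of programs requesting services through an API rather than touching hardware directly, the Python sketch below asks the operating system, via the standard os module, for a few pieces of per-process state:

```python
import os

# Each call below is a request to the operating system through an API;
# the kernel performs the privileged work and returns the result.
print("Process ID:", os.getpid())           # identity of this process
print("Working directory:", os.getcwd())    # filesystem state held by the OS
print("PATH:", os.environ.get("PATH", ""))  # environment kept per process
```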
Secondary Storage
Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down—it is non-volatile. Per unit, it is typically also two orders of magnitude less expensive than primary storage. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage, and data is kept there for a longer time.
In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the very significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.
When data reside on disk, accessing them in blocks to hide latency is the key to designing efficient external-memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel, increasing the bandwidth between primary and secondary memory.[2]
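A rough back-of-the-envelope model shows why large contiguous blocks hide seek time. The seek and transfer figures below are illustrative assumptions, not measurements of any particular drive:

```python
# Assumed figures for illustration: ~10 ms seek + rotational latency,
# ~100 MB/s sustained transfer rate once the head is positioned.
SEEK_S = 0.010
TRANSFER_BPS = 100e6

def effective_throughput(block_bytes):
    """Bytes/s when each block costs one seek plus its transfer time."""
    return block_bytes / (SEEK_S + block_bytes / TRANSFER_BPS)

for size in (4 * 1024, 64 * 1024, 1024 * 1024, 16 * 1024 * 1024):
    print(f"{size >> 10:>6} KiB blocks -> "
          f"{effective_throughput(size) / 1e6:6.1f} MB/s")
# Tiny blocks waste almost all the time seeking; large blocks
# approach the drive's sustained transfer rate.
```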
Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.
Secondary storage is often formatted according to a file system, which provides the abstraction necessary to organize data into files and directories and also stores additional information (called metadata) describing the owner of a file, the access time, the access permissions, and so on.
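Python's standard library exposes this metadata directly through os.stat; a minimal sketch (the file name is a placeholder, created here so the example runs):

```python
import os
import stat
import time

# "example.txt" is a placeholder; we create it so the sketch is runnable.
with open("example.txt", "w") as f:
    f.write("hello\n")

info = os.stat("example.txt")
print("Size (bytes):", info.st_size)
print("Permissions: ", stat.filemode(info.st_mode))  # e.g. -rw-r--r--
print("Owner (uid): ", info.st_uid)                  # numeric owner on POSIX
print("Last access: ", time.ctime(info.st_atime))
```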
Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage (a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage are necessary, the more overall system performance degrades.
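A toy sketch of the least-recently-used eviction this paragraph describes, with an OrderedDict standing in for the set of resident pages (the capacity and page numbers are invented):

```python
from collections import OrderedDict

CAPACITY = 3              # pages that fit in "primary storage" (invented)
resident = OrderedDict()  # page number -> contents; order tracks recency

def access(page):
    """Touch a page, evicting the least-recently-used one when full."""
    if page in resident:
        resident.move_to_end(page)             # recently used again
    else:
        if len(resident) >= CAPACITY:
            evicted, _ = resident.popitem(last=False)
            print(f"swap out page {evicted}")  # would go to the page file
        resident[page] = f"data-{page}"        # would be read back on demand
    return resident[page]

for p in (1, 2, 3, 1, 4, 2):
    access(p)
print(list(resident))  # [1, 4, 2] — pages 2 and 3 were swapped out
```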
Primary Storage
Primary storage (or main memory or internal memory), often referred to simply as memory, is the only kind of storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there, in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. A revolution came with the invention of the transistor, which soon enabled a previously unimaginable miniaturization of electronic memory via solid-state silicon chip technology.
This led to modern random-access memory (RAM). It is small and light, but also quite expensive. (The particular types of RAM used for primary storage are volatile, i.e. they lose the information when not powered.)
Traditionally there are two more sub-layers of primary storage besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions direct the arithmetic and logic unit to perform various calculations or other operations on this data (or with its help). Registers are technically among the fastest of all forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It is introduced solely to increase the performance of the computer. The most actively used information in main memory is duplicated in the cache, which is faster but of much smaller capacity; the cache, in turn, is much slower but much larger than the processor registers. A multi-level hierarchical cache setup is also commonly used: the primary cache is the smallest and fastest and is located inside the processor, while the secondary cache is somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus, which is actually two buses: an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of the data. Then it reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory or for other tasks.
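A toy model of that exchange, with a dictionary standing in for RAM (the addresses and contents are invented):

```python
# Toy model: the CPU first places an address on the address bus, then the
# data itself moves over the data bus. Contents are invented for illustration.
memory = {0x0040: 0xDE, 0x0041: 0xAD}

def read(address):
    # 1. the address bus carries the memory address
    # 2. the data bus returns the stored value
    return memory[address]

def write(address, value):
    memory[address] = value  # data bus carries the value to that address

print(hex(read(0x0040)))  # 0xde
write(0x0042, 0xBE)
print(hex(read(0x0042)))  # 0xbe
```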
As the RAM types used for primary storage are volatile (their contents are lost when power is removed), a computer containing only such storage would have no source from which to read instructions in order to start up. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start executing it. The non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing, as most ROM types are also capable of random access).
Many types of "ROM" are not literally read-only, as updates are possible; however, writing is slow, and memory must be erased in large portions before it can be rewritten. Some embedded systems run programs directly from ROM (or similar media) because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; they instead use large capacities of secondary storage, which is also non-volatile and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.[1]
Storage
Computer data storage, often called storage or memory, refers to computer components, devices, and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention. It is one of the fundamental components of all modern computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s.
In contemporary usage, memory usually refers to a form of semiconductor storage known as random-access memory (RAM) and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass storage — optical discs, forms of magnetic storage like hard disk drives, and other types slower than RAM, but of a more permanent nature. Historically, memory and storage were respectively called main memory and secondary storage. The terms internal memory and external memory are also used.
The contemporary distinctions are helpful because they are also fundamental to the architecture of computers in general. They also reflect an important technical difference between memory and mass storage devices, one that has been blurred by the historical usage of the term storage. Nevertheless, this article uses the traditional nomenclature.
Wednesday, 21 April 2010
IP Address
An Internet Protocol (IP) address is a numerical label assigned to each device participating in a computer network that uses the Internet Protocol for communication between its nodes.[1] An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."[2]
The designers of TCP/IP defined an IP address as a 32-bit number[1] and this system, known as Internet Protocol Version 4 or IPv4, is still in use today. However, due to the enormous growth of the Internet and the resulting depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995[3] and last standardized by RFC 2460 in 1998.[4] Although IP addresses are stored as binary numbers, they are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for IPv6).
The Internet Protocol also routes data packets between networks; IP addresses specify the locations of the source and destination nodes in the topology of the routing system. For this purpose, some of the bits in an IP address are used to designate a subnetwork. The number of these bits is indicated in CIDR notation, appended to the IP address; e.g., 208.77.188.166/24.
As the development of private networks raised the threat of IPv4 address exhaustion, RFC 1918 set aside a group of private address spaces that may be used by anyone on private networks. They are often used with network address translators to connect to the global public Internet.
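Python's standard ipaddress module can illustrate both points — parsing the CIDR notation above and testing for RFC 1918 private space (the sample addresses are the ones from the text):

```python
import ipaddress

# CIDR notation: /24 means the first 24 bits designate the subnetwork.
net = ipaddress.ip_network("208.77.188.0/24")
addr = ipaddress.ip_address("208.77.188.166")
print(addr in net)        # True — the address falls inside the subnet
print(net.num_addresses)  # 256 addresses in a /24

# RFC 1918 private space vs. public space
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
print(addr.is_private)                                  # False

# IPv6 addresses parse the same way
print(ipaddress.ip_address("2001:db8:0:1234:0:567:1:1").version)  # 6
```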
The Internet Assigned Numbers Authority (IANA), which manages the IP address space allocations globally, cooperates with five Regional Internet Registries (RIRs) to allocate IP address blocks to Local Internet Registries (Internet service providers) and other entities.
Colour code
Authors of web pages have a variety of options available for specifying colors for elements of web documents. Colors may be specified as an RGB triplet in hexadecimal format (a hex triplet); they may also be specified according to their common English names in some cases. Often a color tool or other graphics software is used to generate color values.
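As a small sketch, a hex triplet is just the three channel values, 0–255 each, printed as two hexadecimal digits apiece:

```python
def hex_triplet(r, g, b):
    """Format an RGB colour (0-255 per channel) as an HTML hex triplet."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(hex_triplet(255, 0, 0))     # "#ff0000" — the named colour red
print(hex_triplet(70, 130, 180))  # "#4682b4" — steelblue in the X11 list
```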
The first versions of Mosaic and Netscape Navigator used the X11 color names as the basis for their color lists, as both started as X Window System applications.[3]
Web colors have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, adaptive whitepoint, and viewing conditions.[4] These have been chosen to be similar to many real-world monitors and viewing conditions, so that even without color management rendering is fairly close to the specified values. However, user agents vary in the fidelity with which they represent the specified colors. More advanced user agents use color management to provide better color fidelity; this is particularly important for Web-to-print applications.
Straight cable
You usually use a straight-through cable to connect different types of devices. This type of cable is used most of the time and can be used to:
1) Connect a computer to a switch/hub's normal port.
2) Connect a computer to a cable/DSL modem's LAN port.
3) Connect a router's WAN port to a cable/DSL modem's LAN port.
4) Connect a router's LAN port to a switch/hub's uplink port (normally used for expanding a network).
5) Connect two switches/hubs, with one switch/hub using its uplink port and the other using a normal port.
Cross cable
A crossover cable connects two devices of the same type, for example DTE to DTE or DCE to DCE, which would usually be connected asymmetrically (DTE to DCE), by means of a modified cable called a crosslink. This distinction between device types was introduced by IBM.
The crossing of the wires in a cable or in a connector adaptor allows (see the pin-mapping sketch after this list):
connecting two devices directly, output of one to input of the other,
letting two terminal (DTE) devices, e.g. two PCs, communicate without an interconnecting hub node,
linking two or more hubs, switches or routers (DCE) together, possibly to work as one wider device.
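The difference between straight-through and crossover cables comes down to the pin mapping. A sketch of the 10/100BASE-T case, where a DTE transmits on pins 1 and 2 and receives on pins 3 and 6:

```python
# 10/100BASE-T uses pins 1/2 to transmit and 3/6 to receive (on a DTE).
# A straight-through cable maps each pin to itself; a crossover cable
# swaps the transmit and receive pairs.
straight = {1: 1, 2: 2, 3: 3, 6: 6}
crossover = {1: 3, 2: 6, 3: 1, 6: 2}

def far_end(mapping, pin):
    """Which pin on the far connector a given near-end pin reaches."""
    return mapping[pin]

print(far_end(straight, 1))   # 1 — two like devices would both transmit here
print(far_end(crossover, 1))  # 3 — TX now meets the other side's RX
```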
CPU Cache
A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. As long as most memory accesses are cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.
Most modern desktop and server CPUs have at least three independent caches: an instruction cache to speed up executable instruction fetch, a data cache to speed up data fetch and store, and a translation lookaside buffer used to speed up virtual-to-physical address translation for both executable instructions and data.
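A toy direct-mapped cache showing the lookup these paragraphs describe; the sizes, addresses and contents are invented, and real caches work on multi-byte lines in hardware:

```python
# Toy direct-mapped cache: 4 slots, each holding one address's value.
NUM_SLOTS = 4
cache = [None] * NUM_SLOTS  # each entry: (tag, value) or None
main_memory = {addr: addr * 10 for addr in range(32)}  # invented contents

def load(address):
    slot = address % NUM_SLOTS   # index bits pick the slot
    tag = address // NUM_SLOTS   # tag identifies which address occupies it
    entry = cache[slot]
    if entry is not None and entry[0] == tag:
        print(f"hit  addr={address}")        # fast path: served from cache
        return entry[1]
    print(f"miss addr={address} (fetch from main memory)")
    value = main_memory[address]
    cache[slot] = (tag, value)               # fill the slot, evicting any occupant
    return value

for a in (3, 3, 7, 3):  # 3 and 7 collide in the same slot
    load(a)
```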
Tuesday, 23 March 2010
Mouse
A mouse (plural mice) is a small mammal belonging to the order of rodents. The best known mouse species is the common house mouse (Mus musculus). It is also a popular pet. In some places, certain kinds of field mice are also common. This rodent is eaten by large birds such as hawks and eagles. They are known to invade homes for food and occasionally shelter.
The American White-footed Mouse (Peromyscus leucopus) and the deer mouse (Peromyscus maniculatus), as well as other common species of mouse-like rodents around the world, also sometimes live in houses. These, however, are in other genera.
Although mice may live up to two and a half years in captivity, the average mouse in the wild lives only about four months, primarily owing to heavy predation. Cats, wild dogs, foxes, birds of prey, snakes and even certain kinds of arthropods have been known to prey heavily upon mice. Nevertheless, because of its remarkable adaptability to almost any environment, and its ability to live commensally with humans, the mouse is one of the most successful mammalian genera living on Earth today.
Mice can at times be harmful rodents, damaging and eating crops,[1] causing structural damage and spreading diseases through their parasites and feces.[2] In North America, breathing dust that has come in contact with mouse excrement has been linked to hantavirus, which may lead to Hantavirus Pulmonary Syndrome (HPS). The original motivation for the domestication of cats is thought to have been their predation of mice and their relatives, the rats.
Primarily nocturnal animals, mice compensate for their poor eyesight with a keen sense of hearing, and rely especially on their sense of smell to locate food and avoid predators.
Monday, 22 March 2010
Virus
A virus (from the Latin virus meaning toxin or poison) is a small infectious agent that can replicate only inside the cells of other organisms. Most viruses are too small to be seen directly with a light microscope. Viruses infect all types of organisms, from animals and plants to bacteria and archaea.[1] Since the initial discovery of tobacco mosaic virus by Martinus Beijerinck in 1898,[2] about 5,000 viruses have been described in detail,[3] although there are millions of different types.[4] Viruses are found in almost every ecosystem on Earth and these minute structures are the most abundant type of biological entity.[5][6] The study of viruses is known as virology, a sub-specialty of microbiology.
Unlike prions and viroids, viruses consist of two or three parts: all viruses have genes made from either DNA or RNA, long molecules that carry genetic information; all have a protein coat that protects these genes; and some have an envelope of fat that surrounds them when they are outside a cell. (Viroids do not have a protein coat and prions contain no RNA or DNA.) Viruses vary from simple helical and icosahedral shapes to more complex structures. Most viruses are about one hundred times smaller than an average bacterium. The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity.[7]
Viruses spread in many ways; plant viruses are often transmitted from plant to plant by insects that feed on sap, such as aphids, while animal viruses can be carried by blood-sucking insects. These disease-bearing organisms are known as vectors. Influenza viruses are spread by coughing and sneezing. The norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal-oral route and are passed from person to person by contact, entering the body in food or water. HIV is one of several viruses transmitted through sexual contact and by exposure to infected blood.
Viral infections in animals provoke an immune response that usually eliminates the infecting virus. Immune responses can also be produced by vaccines, which confer an artificially acquired immunity to the specific viral infection. However, some viruses including those causing HIV and viral hepatitis evade these immune responses and result in chronic infections. Microorganisms also have defences against viral infection, such as restriction modification systems which restrict the growth of viruses. Antibiotics have no effect on viruses, but several antiviral drugs have been developed.