Intrusion prevention system

Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, attempt to block/stop it, and report it.

Intrusion prevention systems are considered extensions of intrusion detection systems because both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are supposed to be able to actively prevent/block intrusions that are detected. An IPS can take such actions as sending an alarm, dropping detected malicious packets, resetting a connection, and/or blocking traffic from the offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, defragment packet streams, mitigate TCP sequencing issues, and clean up unwanted transport- and network-layer options. An ideal IPS would accomplish all of these functions. However, software-based IPS are themselves often the victims of malware that edits the IPS signature file so that the malware can move through the IPS.

Classifications

Intrusion prevention systems can be classified into four different types:

1.    Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity.

2.    Wireless intrusion prevention system (WIPS): monitors a wireless network for suspicious traffic by analyzing wireless networking protocols.

3.    Network behavior analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial-of-service (DDoS) attacks, certain forms of malware, and policy violations.

4.    Host-based intrusion prevention system (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host.

Detection methods

The majority of intrusion prevention systems utilize one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.

1.    Signature-based detection: A signature-based IDS monitors packets on the network and compares them against pre-configured and pre-determined attack patterns known as signatures.

2.    Statistical anomaly-based detection: A statistical anomaly-based IDS determines the normal network activity – like what sort of bandwidth is generally used, what protocols are used, and what ports and devices generally connect to each other – and alerts the administrator or user when traffic is detected which is anomalous (not normal).

3.    Stateful protocol analysis detection: This method identifies deviations of protocol states by comparing observed events with “predetermined profiles of generally accepted definitions of benign activity.”
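Signature-based detection can be sketched as a simple substring match of packet payloads against a table of known attack patterns. The signature names and byte patterns below are made-up illustrations, not a real IDS ruleset:

```python
# Minimal sketch of signature-based detection: each signature is a byte
# pattern; a payload containing any pattern triggers an alert.
SIGNATURES = {
    "nop-sled": b"\x90\x90\x90\x90",       # hypothetical example patterns
    "sql-injection": b"' OR '1'='1",
}

def match_signatures(payload: bytes) -> list:
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET /login?user=' OR '1'='1 HTTP/1.1")
```

Real signature engines compile thousands of patterns into efficient multi-pattern matchers rather than scanning them one by one.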

From: https://en.wikipedia.org/wiki/Intrusion_prevention_system


Other posts in the 'Computer Science > Terminology' category

Virtual Private Network (VPN)  (0) 2018.03.30
Security Information and Event Management (SIEM)  (0) 2018.03.30
Application firewall  (0) 2018.03.30
Open System Interconnection Protocols  (0) 2018.03.30
Stateful Firewall  (0) 2018.03.30

Application firewall

An application firewall is a form of firewall that controls input, output, and/or access from, to, or by an application or service. It operates by monitoring and potentially blocking the input, output, or system service calls that do not meet the configured policy of the firewall. The application firewall is typically built to control all network traffic on any OSI layer up to the application layer. It is able to control applications or services specifically, unlike a stateful network firewall, which – without additional software – is unable to control network traffic regarding a specific application. There are two primary categories of application firewalls: network-based application firewalls and host-based application firewalls.

Network-based application firewalls

A network-based application layer firewall is a computer networking firewall operating at the application layer of a protocol stack, and is also known as a proxy-based or reverse-proxy firewall. Application firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web application firewall. They may be implemented through software running on a host or a stand-alone piece of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing it on to the client or server. Because it acts on the application layer, it may inspect the contents of traffic, blocking specified content, such as certain websites, viruses, or attempts to exploit known logical flaws in client software.
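The content inspection described above can be sketched as a check a proxy might run on each HTTP request before forwarding it. The blocked tokens are invented examples, not a real web application firewall ruleset:

```python
# Sketch of application-layer inspection: block a request whose path or
# body contains a token from a (hypothetical) blocklist.
BLOCKED_TOKENS = ["<script>", "../", "UNION SELECT"]

def allow_request(path: str, body: str) -> bool:
    """Return True if the request carries none of the blocked tokens."""
    text = (path + " " + body).upper()
    return not any(token.upper() in text for token in BLOCKED_TOKENS)

allow_request("/index.html", "")                       # benign request
allow_request("/search?q=<script>alert(1)</script>", "")  # blocked
```

A real proxy firewall would parse the full protocol (headers, encodings, chunking) rather than scan raw strings, but the decision structure is the same.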

From: https://en.wikipedia.org/wiki/Application_firewall



Open System Interconnection Protocols

The Open System Interconnection (OSI) protocol suite comprises numerous standard protocols that are based on the OSI reference model. These protocols are part of an international program to develop data-networking protocols and other standards that facilitate multivendor equipment interoperability. The OSI program grew out of a need for international networking standards and is designed to facilitate communication between hardware and software systems despite differences in underlying architectures.

The OSI specifications were conceived and implemented by two international standards organizations: the International Organization for Standardization (ISO) and the International Telecommunication Union Telecommunication Standardization Sector (ITU-T). This article provides a summary of the OSI protocol suite and illustrates its mapping to the general OSI reference model.

From: http://docwiki.cisco.com/wiki/Open_System_Interconnection_Protocols



Stateful firewall

In computing, a stateful firewall is a network firewall that tracks the operating state and characteristics of network connections traversing it. The firewall is configured to distinguish legitimate packets for different types of connections. Only packets matching a known active connection are allowed to pass the firewall.

Stateful packet inspection (SPI), alsoreferred to as dynamic packet filtering, is a security feature often includedin business networks.

A stateful firewall keeps track of the state of network connections (such as TCP streams or UDP communication) and is able to hold significant attributes of each connection in memory. These attributes are collectively known as the state of the connection, and may include such details as the IP addresses and ports involved in the connection and the sequence numbers of the packets traversing the connection. Stateful inspection monitors incoming and outgoing packets over time, as well as the state of the connection, and stores the data in dynamic state tables. This cumulative data is evaluated, so that filtering decisions are based not only on administrator-defined rules, but also on context that has been built by previous connections as well as previous packets belonging to the same connection.
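The state table described above can be sketched with connections keyed by a 5-tuple of protocol, addresses, and ports. This simplified model omits sequence-number tracking and connection teardown; the addresses are examples only:

```python
# Sketch of a stateful filter: outbound packets create a state-table
# entry; an inbound packet is allowed only if it matches the reverse
# of a known connection (i.e., it is a reply to traffic we initiated).
state_table = set()

def record_outbound(proto, src, sport, dst, dport):
    """Remember a connection initiated from inside the network."""
    state_table.add((proto, src, sport, dst, dport))

def allow_inbound(proto, src, sport, dst, dport):
    """Allow an inbound packet only if the reversed 5-tuple is known."""
    return (proto, dst, dport, src, sport) in state_table

record_outbound("tcp", "10.0.0.5", 51000, "93.184.216.34", 443)
allow_inbound("tcp", "93.184.216.34", 443, "10.0.0.5", 51000)  # reply: allowed
allow_inbound("tcp", "203.0.113.9", 80, "10.0.0.5", 51000)     # unsolicited: denied
```

Real firewalls also expire entries after a timeout and track TCP state transitions (SYN, ESTABLISHED, FIN) rather than a bare membership test.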

From: https://en.wikipedia.org/wiki/Stateful_firewall



Packet-filtering

On the Internet, packet filtering is the process of passing or blocking packets at a network interface based on source and destination addresses, ports, or protocols. The process is used in conjunction with packet mangling and Network Address Translation (NAT). Packet filtering is often part of a firewall program for protecting a local network from unwanted intrusion.

In a software firewall, packet filtering is done by a program called a packet filter. The packet filter examines the header of each packet based on a specific set of rules, and on that basis decides to prevent it from passing (called DROP) or allow it to pass (called ACCEPT).

There are three ways in which a packet filter can be configured, once the set of filtering rules has been defined. In the first method, the filter accepts only those packets that it is certain are safe, dropping all others. This is the most secure mode, but it can cause inconvenience if legitimate packets are inadvertently dropped. In the second method, the filter drops only the packets that it is certain are unsafe, accepting all others. This mode is the least secure, but it causes less inconvenience, particularly in casual Web browsing. In the third method, if the filter encounters a packet for which its rules do not provide instructions, that packet can be quarantined, or the user can be specifically queried concerning what should be done with it. This can be inconvenient if it causes numerous dialog boxes to appear, for example, during Web browsing.
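The three configurations differ only in what happens when no explicit rule matches. A minimal sketch, with an invented two-rule table for illustration:

```python
# Default-deny, default-allow, and "ask" are just different fallback
# actions applied when no rule matches the packet.
RULES = {("tcp", 443): "ACCEPT",   # hypothetical example rules
         ("tcp", 23): "DROP"}

def filter_packet(proto, dport, default="DROP"):
    """Look up an explicit rule; otherwise apply the default policy."""
    return RULES.get((proto, dport), default)

filter_packet("tcp", 443)             # explicit rule: "ACCEPT"
filter_packet("udp", 5353)            # method 1, default-deny: "DROP"
filter_packet("udp", 5353, "ACCEPT")  # method 2, default-allow: "ACCEPT"
filter_packet("udp", 5353, "ASK")     # method 3: quarantine / query the user
```

In practice the default-deny policy (method 1) is the usual recommendation, since an unanticipated packet is then blocked rather than admitted.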

Packet Filtering Firewall: An Introduction

The packet filtering firewall is one of the most basic firewalls. The first step in protecting internal users from external network threats is to implement this type of security. The first firewalls ever used were of the packet filtering type only. As the trends of network threats started changing, so did the firewall-building strategies. Most routers have packet filtering built in, but the problem with routers is that they are difficult to configure and don't provide extensive logs of incidents.

To start with network security, packet filtering firewalls are the way to go. This functionality is still the main aim of most commercial and non-commercial firewalls. As you know from the definition and purpose of a firewall, the firewall is the first destination for traffic coming to your internal network. So, anything which comes to your internal network passes through the firewall. Of course, the reverse is also true: any outgoing traffic will also pass through the firewall before leaving your network completely. This is the reason that this type of firewall filter is also sometimes called a screening router.

Types of Packet Filtering

A packet filtering firewall allows only those packets to pass which are allowed as per your firewall policy. Each packet passing through is inspected, and then the firewall decides whether to pass it or not. Packet filtering can be divided into two parts:

1.    Stateless packet filtering.

2.    Stateful packet filtering.

Data travels through the Internet in the form of packets. Each packet has a header which provides information about the packet, its source and destination, etc. Packet filtering firewalls inspect these packets to allow or deny them. The information may or may not be remembered by the firewall.

Stateless Packet Filtering

If the information about the passing packets is not remembered by the firewall, then this type of filtering is called stateless packet filtering. This type of firewall is not smart enough and can be fooled very easily by hackers. These are especially dangerous for UDP-type data packets. The reason is that the allow/deny decisions are taken on a packet-by-packet basis, and these are not related to the previously allowed/denied packets.

Stateful Packet Filtering

If the firewall remembers the information about previously passed packets, then that type of filtering is stateful packet filtering. These can be termed smart firewalls. This type of filtering is also known as dynamic packet filtering.

Important Features of Packet Filters

Good firewalls normally follow a few specific rules upon which features are incorporated during firewall design. A few are listed below:

1.    The firewall should provide a good deal of logs. The more detailed the logs, the better the protection.

2.    The command line syntax or GUI of the firewall should make it easy to create new rules and, of course, firewall exceptions.

3.    The ordering of packet filter rules should be evaluated carefully in order to make the filtering fruitful.

From: http://searchnetworking.techtarget.com/definition/packet-filtering

From: http://securityworld.worldiswelcome.com/packet-filtering-firewall-an-introduction



User Datagram Protocol

The User Datagram Protocol (UDP) is one ofthe core members of the Internet protocol suite. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.

UDP uses a simple connectionless transmission model with a minimum of protocol mechanism. It has no handshaking dialogues, and thus exposes the user’s program to any unreliability of the underlying network protocol. There is no guarantee of delivery, ordering, or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.

With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths. UDP is suitable for purposes where error checking and correction is either not necessary or is performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system. If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose.

Packet structure

UDP is a minimal message-oriented Transport Layer protocol that is documented in IETF RFC 768.

UDP provides no guarantees to the upper layer protocol for message delivery and the UDP layer retains no state of UDP messages once sent. For this reason, UDP sometimes is referred to as Unreliable Datagram Protocol.

UDP provides application multiplexing (via port numbers) and integrity verification (via checksum) of the header and payload. If transmission reliability is desired, it must be implemented in the user’s application.

The UDP header consists of 4 fields, each of which is 2 bytes (16 bits). The use of the fields “Checksum” and “Source port” is optional in IPv4. In IPv6 only the source port is optional.

-      Source port number: This field identifies the sender’s port when meaningful and should be assumed to be the port to reply to if needed. If not used, then it should be zero. If the source host is the client, the port number is likely to be an ephemeral port number.

-      Destination port number: This field identifies the receiver’s port and is required. Similar to source port number, if the client is the destination host then the port number will likely be an ephemeral port number and if the destination host is the server then the port number will likely be a well-known port number.

-      Length: A field that specifies the length in bytes of the UDP header and UDP data. The minimum length is 8 bytes because that is the length of the header. The field size sets a theoretical limit of 65,535 bytes (8 byte header + 65,527 bytes of data) for a UDP datagram. The practical limit for the data length which is imposed by the underlying IPv4 protocol is 65,507 bytes (65,535 – 8 byte UDP header – 20 byte IP header).

-      Checksum: The checksum field may be used for error-checking of the header and data. This field is optional in IPv4, and mandatory in IPv6. The field carries all-zeros if unused.
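The four 16-bit fields above can be packed into the 8-byte header layout defined in RFC 768. The port numbers and payload below are arbitrary examples, and the checksum is left as zero (unused, as IPv4 permits):

```python
import struct

# Pack the UDP header: source port, destination port, length, checksum,
# each an unsigned 16-bit field in network (big-endian) byte order.
def udp_header(sport: int, dport: int, payload: bytes, checksum: int = 0) -> bytes:
    length = 8 + len(payload)   # length covers the 8-byte header plus the data
    return struct.pack("!HHHH", sport, dport, length, checksum)

header = udp_header(53, 33434, b"hello")   # 8 bytes; length field = 13
```

A real implementation would compute the checksum over a pseudo-header plus the payload; here it is simply omitted, which IPv4 allows but IPv6 does not.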

Reliability and congestion control solutions

Lacking reliability, UDP applications must generally be willing to accept some loss, errors or duplication. Some applications, such as TFTP, may add rudimentary reliability mechanisms into the application layer as needed.

Most often, UDP applications do not employ reliability mechanisms and may even be hindered by them. Streaming media, real-time multiplayer games, and voice over IP (VoIP) are examples of applications that often use UDP. In these particular applications, loss of packets is not usually a fatal problem. If an application requires a high degree of reliability, a protocol such as the Transmission Control Protocol may be used instead.

From: https://en.wikipedia.org/wiki/User_Datagram_Protocol



Transmission Control Protocol

The Transmission Control Protocol (TCP) is a core protocol of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets between applications running on hosts communicating over an IP network. Major Internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP), which provides a connectionless datagram service that emphasizes reduced latency over reliability.

Network function

The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the Transport Layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required packet fragmentation on the transmission medium. At the transport layer, the protocol handles all handshaking and transmission details and presents an abstraction of the network connection to the application.

At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data, and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, its source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application’s communication from the underlying networking details.

TCP is utilized extensively by many popular applications carried on the Internet, including the World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and many streaming media applications.

TCP is optimized for accurate delivery rather than timely delivery, and therefore, TCP sometimes incurs relatively long delays (on the order of seconds) while waiting for out-of-order messages or retransmissions of lost messages. It is not particularly suitable for real-time applications such as Voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended instead.

TCP is a reliable stream delivery service which guarantees that all bytes received will be identical to the bytes sent and in the correct order. Since packet transfer over many networks is not reliable, a technique known as positive acknowledgment with retransmission is used to guarantee reliability of packet transfers. This fundamental technique requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends. The sender also maintains a timer from when the packet was sent, and retransmits a packet if the timer expires before the message has been acknowledged. The timer is needed in case a packet gets lost or corrupted.
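The sender-side bookkeeping described above can be sketched as a table of unacknowledged segments, each with a retransmission deadline. This is a simplified model with illustrative timeout values, not TCP's actual adaptive retransmission algorithm:

```python
# Sketch of positive acknowledgment with retransmission: every sent
# segment is remembered until acknowledged; segments whose timer has
# expired are resent, up to a retry limit.
class ReliableSender:
    def __init__(self, timeout=1.0, max_retries=5):
        self.timeout = timeout
        self.max_retries = max_retries
        self.unacked = {}   # seq -> (data, deadline, retries)

    def send(self, seq, data, now):
        self.unacked[seq] = (data, now + self.timeout, 0)

    def ack(self, seq):
        self.unacked.pop(seq, None)   # receiver confirmed this segment

    def due_for_retransmit(self, now):
        """Return segments to resend; reschedule their timers."""
        expired = []
        for seq, (data, deadline, retries) in list(self.unacked.items()):
            if now >= deadline:
                if retries + 1 > self.max_retries:
                    raise TimeoutError(f"segment {seq} undeliverable")
                self.unacked[seq] = (data, now + self.timeout, retries + 1)
                expired.append(seq)
        return expired
```

Real TCP additionally adapts the timeout to the measured round-trip time and acknowledges cumulative byte ranges rather than individual segments.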

While IP handles the actual delivery of the data, TCP keeps track of the individual units of data transmission, called segments, that a message is divided into for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the sequence of octets of the file into segments and forwards them individually to the IP software layer (Internet Layer). The Internet Layer encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP layer (Transport Layer) reassembles the individual segments and ensures they are correctly ordered and error-free as it streams them to an application.

From: https://en.wikipedia.org/wiki/Transmission_Control_Protocol



Internet Protocol

The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.

IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information.

Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974; the other component was the connection-oriented Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP.

Function


The Internet Protocol is responsible for addressing hosts and for routing datagrams (packets) from a source host to a destination host across one or more IP networks. For this purpose, the Internet Protocol defines the format of packets and provides an addressing system that has two functions: identifying hosts and providing a logical location service.

-      Datagram construction: Each datagram has two components: a header and a payload. The IP header is tagged with the source IP address, the destination IP address, and other meta-data needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation.

-      IP addressing and routing: IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. The address space is divided into networks and subnetworks, involving the designation of network or routing prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network. IP routing is also common in local networks. For example, many Ethernet switches support IP multicast operations. These switches use IP addresses and the Internet Group Management Protocol to control multicast routing but use MAC addresses for the actual routing.
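The encapsulation described in the first bullet can be sketched as a header of addressing metadata wrapped around an opaque payload. This is a simplified model for illustration, not the real IPv4 header layout, and the addresses are examples:

```python
from dataclasses import dataclass

# Sketch of datagram construction: a header (source, destination, and
# routing metadata such as TTL) carrying a payload of transported data.
@dataclass
class Datagram:
    src: str          # source IP address
    dst: str          # destination IP address
    ttl: int          # time-to-live, an example of routing metadata
    payload: bytes    # the encapsulated data

pkt = Datagram(src="10.0.0.5", dst="93.184.216.34", ttl=64, payload=b"GET /")
```

Each router along the path reads only the header to make its forwarding decision; the payload is passed through untouched (apart from TTL decrement and checksum update in real IPv4).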

Link capacity and capability

The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested, even if the path is available and reliable. One of the technical constraints is the size of data packets allowed on a given link. An application must assure that it uses proper transmission characteristics. Some of this responsibility lies also in the upper layer protocols. Facilities exist to examine the maximum transmission unit (MTU) size of the local link, and Path MTU Discovery can be used for the entire projected path to the destination. The IPv4 internetworking layer has the capability to automatically fragment the original datagram into smaller units for transmission. In this case, IP provides re-ordering of fragments delivered out of order.

The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and the Internet Control Message Protocol (ICMP) disregard MTU size, thereby forcing IP to fragment oversized datagrams.
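Fragmentation can be sketched as splitting an oversized payload into pieces that fit the link MTU, each tagged with its byte offset for reassembly. The 20-byte header and 1500-byte MTU below are the common IPv4/Ethernet values, and the sketch ignores flags and fragment IDs:

```python
# Sketch of IPv4-style fragmentation: payload per fragment must fit in
# (MTU - header) and, as IPv4 requires, be a multiple of 8 bytes for
# every fragment except possibly the last.
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    max_data = ((mtu - header_len) // 8) * 8   # round down to a multiple of 8
    frags = []
    offset = 0
    while offset < len(payload):
        frags.append((offset, payload[offset:offset + max_data]))
        offset += max_data
    return frags

frags = fragment(b"x" * 4000, mtu=1500)   # 4000 bytes over a 1500-byte MTU
```

The receiver sorts fragments by offset and concatenates them; losing any one fragment forces the whole datagram to be discarded, which is one reason transport protocols like TCP prefer to avoid fragmentation entirely.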

From: https://en.wikipedia.org/wiki/Internet_Protocol



Protocol

In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols specify interactions between the communicating entities.

Protocols exist at several levels in a telecommunication connection. For example, there are protocols for data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.

The TCP/IP internet protocols, a common example, consist of:

-      Transmission Control Protocol (TCP), which uses a set of rules to exchange messages with other Internet points at the information packet level

-      Internet Protocol (IP), which uses a set of rules to send and receive messages at the Internet address level

-      Additional protocols that include the Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere on the Internet.

There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the Dynamic Host Configuration Protocol (DHCP).

Communications protocol

In telecommunications, a communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. These are the rules or standards that define the syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

Communicating systems use well-defined formats for exchanging messages. Each message has an exact meaning intended to elicit a response from a range of possible responses pre-determined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communications protocols have to be agreed upon by the parties involved. To reach agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications what programming languages are to computations.

Basic requirements of protocols

Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:

-      Data formats for data exchange. Digital message bitstrings are divided into fields, and each field carries information relevant to the protocol. Conceptually, the bitstring is divided into two parts called the header area and the data area. The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.

-      Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are intended for themselves and should be processed or should be ignored. A connection between a sender and receiver can be identified using an address pair (sender address, receiver address). Usually some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address values are collectively called an addressing scheme.

-      Address mapping. Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.

-      Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. This way of connecting networks is called internetworking.

-      Detection of transmission errors is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects packets on CRC differences and arranges somehow for retransmission.

-      Acknowledgements of correct reception of packets are required for connection-oriented communication.

-      Loss of information – timeouts and retries. Packets may be lost on the network or suffer from long delays. To cope with this, under some protocols, a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On timeouts, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, the retransmission has no effect so the number of retransmissions is limited. Exceeding the retry limit is considered an error.
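The CRC-based error detection in the list above can be sketched with Python's standard `zlib.crc32`: the sender appends a CRC of the data area to the packet, and the receiver recomputes it and rejects the packet on mismatch:

```python
import zlib

# Sender side: append a 4-byte CRC of the data area to the packet.
def make_packet(data: bytes) -> bytes:
    return data + zlib.crc32(data).to_bytes(4, "big")

# Receiver side: recompute the CRC; on mismatch, reject the packet
# (and, under a reliable protocol, arrange for retransmission).
def check_packet(packet: bytes):
    data, crc = packet[:-4], packet[-4:]
    if zlib.crc32(data).to_bytes(4, "big") != crc:
        return None   # reject: transmission error detected
    return data

pkt = make_packet(b"hello")
```

CRCs detect accidental corruption well but are not cryptographic; protocols needing protection against deliberate tampering use message authentication codes instead.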

From: http://searchnetworking.techtarget.com/definition/protocol

From: https://en.wikipedia.org/wiki/Communications_protocol



Packet

A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into “chunks” of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end).
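The chunking and reassembly described above can be sketched as splitting a message into numbered packets that may arrive in any order and are put back in sequence at the receiver. The message and chunk size are arbitrary examples:

```python
# Sketch of packetization: split a message into numbered chunks, then
# reassemble by sorting on the sequence number, regardless of arrival order.
def packetize(message: bytes, size: int):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return b"".join(data for _, data in sorted(packets))

packets = packetize(b"hello, world", 5)
shuffled = packets[::-1]   # simulate out-of-order arrival
```

Real TCP numbers bytes rather than packets and acknowledges received ranges, but the principle of sequence-numbered reassembly is the same.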

A packet-switching scheme is an efficient way to handle transmissions on a connectionless network such as the Internet. An alternative scheme, circuit switching, is used for networks allocated for voice connections. In circuit switching, lines in the network are shared among many users as with packet switching, but each connection requires the dedication of a particular path for the duration of the connection.

“Packet” and “datagram” are similar in meaning. A protocol similar to TCP, the User Datagram Protocol (UDP) uses the term datagram.

From: http://searchnetworking.techtarget.com/definition/packet


