Vendor

A vendor, also known as a supplier, is an individual or company that sells goods or services to someone else in the economic production chain.

Vendors are a part of the supply chain: the network of all the individuals, organizations, resources, activities and technology involved in the creation and sale of a product, from the delivery of source material from the supplier to the manufacturer, through to its eventual delivery to the end user.

Parts manufacturers are vendors of parts to other manufacturers that assemble the parts into something sold to wholesalers or retailers. Retailers are vendors of products to consumers. In information technology as well as in other industries, the term is commonly applied to suppliers of goods and services to other companies.


Router

A router is a networking device that forwards data packets between computer networks. Routers perform the “traffic directing” functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node.

A router is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. This creates an overlay internetwork.
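
To make the table-lookup step concrete, here is a minimal sketch of longest-prefix-match forwarding using Python's standard ipaddress module; the prefixes and next-hop addresses are invented for illustration.

```python
import ipaddress

# A toy routing table: (destination prefix, next hop). The entries are
# hypothetical; a real router learns them from configuration or from
# routing protocols.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),  # default route
]

def next_hop(destination: str) -> str:
    """Return the next hop of the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

print(next_hop("10.1.2.3"))  # 192.168.1.2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # 192.168.1.254 (falls through to the default)
```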

The most familiar type of router is the home or small office router, which simply passes data, such as web pages, email, IM, and videos, between home computers and the Internet. An example of a router would be the owner’s cable or DSL router, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware devices, use of software-based routers has grown increasingly common.

Application

When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections, such as copper cable, fibre optic, or wireless transmission. It also contains firmware for different networking communications protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another.
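
As a rough sketch of how such a table can be built, the following implements a single distance-vector update in the spirit of protocols like RIP (greatly simplified); the topology and link costs are invented.

```python
# One simplified distance-vector update (Bellman-Ford relaxation): a
# router merges a neighbor's advertised distances into its own table.
# The topology and costs are invented for illustration.

def merge(table, neighbor, advertised, link_cost):
    """Update table {destination: (cost, next_hop)} with routes
    advertised by a directly connected neighbor."""
    for dest, (cost, _) in advertised.items():
        candidate = link_cost + cost
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)

# Router A initially knows only its directly connected network...
table_a = {"net1": (0, "direct")}
# ...then hears an advertisement from neighbor B, reachable at cost 1.
table_b = {"net2": (0, "direct"), "net3": (2, "C")}

merge(table_a, "B", table_b, link_cost=1)
print(table_a)  # {'net1': (0, 'direct'), 'net2': (1, 'B'), 'net3': (3, 'B')}
```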

Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections.


Access point (AP)

In a wireless local area network (WLAN), an access point is a station that transmits and receives data (sometimes referred to as a transceiver). An access point connects users to other users within the network and also can serve as the point of interconnection between the WLAN and a fixed wire network. Each access point can serve multiple users within a defined network area; as people move beyond the range of one access point, they are automatically handed over to the next one. A small WLAN may only require a single access point; the number required increases as a function of the number of network users and the physical size of the network.

From: http://searchmobilecomputing.techtarget.com/definition/access-point

Wireless access point

In computer networking, a wireless access point (WAP) is a networking hardware device that allows a Wi-Fi compliant device to connect to a wired network. The WAP usually connects to a router (via a wired network) as a standalone device, but it can also be an integral component of the router itself. A WAP is differentiated from a hotspot, which is the physical location where Wi-Fi access to a WLAN is available.

Introduction

Prior to wireless networks, setting up a computer network in a business, home, or school often required running many cables through walls and ceilings in order to deliver network access to all of the network-enabled devices in the building. With the creation of the wireless access point, network users are now able to add devices that access the network with few or no cables. A WAP normally connects directly to a wired Ethernet connection and then provides wireless connections using radio frequency links for other devices to utilize that wired connection. Most WAPs support the connection of multiple wireless devices to one wired connection. Modern WAPs are built to support a standard for sending and receiving data using radio frequencies. Those standards and the frequencies they use are defined by the IEEE. Most APs use IEEE 802.11 standards.


Transport Layer Security

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), both of which are frequently referred to as “SSL”, are cryptographic protocols that provide communication security over a computer network. Several versions of the protocols are in widespread use in applications such as web browsing, email, Internet faxing, instant messaging, and voice-over-IP (VoIP). Major web sites use TLS to secure all communications between their servers and web browsers.

The primary goal of the Transport Layer Security protocol is to provide privacy and data integrity between two communicating computer applications. When secured by TLS, connections between a client (e.g., a web browser) and a server (e.g., Wikipedia.org) have one or more of the following properties:

-      The connection is private because symmetric cryptography is used to encrypt the data transmitted. The keys for this symmetric encryption are generated uniquely for each connection and are based on a shared secret negotiated at the start of the session. The server and client negotiate the details of which encryption algorithm and cryptographic keys to use before the first byte of data is transmitted. The negotiation of a shared secret is both secure (the negotiated secret is unavailable to eavesdroppers and cannot be obtained, even by an attacker who places himself in the middle of the connection) and reliable (no attacker can modify the communications during the negotiation without being detected).

-      The identity of the communicating parties can be authenticated using public-key cryptography. This authentication can be made optional, but is generally required for at least one of the parties (typically the server).

-      The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission.

In addition to the properties above, careful configuration of TLS can provide additional privacy-related properties such as forward secrecy, ensuring that any future disclosure of encryption keys cannot be used to decrypt any TLS communication recorded in the past.

TLS supports many different methods for exchanging keys, encrypting data, and authenticating message integrity. As a result, secure configuration of TLS involves many configurable parameters, and not all choices provide all of the privacy-related properties described in the list above.

Attempts have been made to subvert aspects of the communications security that TLS seeks to provide and the protocol has been revised several times to address these security threats. Web browsers have also been revised by their developers to defend against potential security weaknesses after these were discovered.

The TLS protocol is composed of two layers: the TLS record protocol and the TLS handshake protocol.

Description

Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering.

Since protocols can operate either with or without TLS (or SSL), it is necessary for the client to indicate to the server the setup of a TLS connection. There are two main ways of achieving this. One option is to use a different port number for TLS connections (for example, port 443 for HTTPS). The other is for the client to use a protocol-specific mechanism (for example, STARTTLS for mail and news protocols) to request that the server switch the connection to TLS.
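
The second mechanism can be seen with Python's standard smtplib: the client connects in plaintext on the submission port and then upgrades the same connection in-band. The mail server name below is hypothetical.

```python
import smtplib
import ssl

# Plaintext connection to an SMTP submission port, then an in-band
# upgrade to TLS via the protocol-specific STARTTLS command.
# "mail.example.com" is a hypothetical server name.
context = ssl.create_default_context()

with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.ehlo()                     # server advertises STARTTLS support
    smtp.starttls(context=context)  # the TLS handshake happens here
    smtp.ehlo()                     # re-identify over the encrypted channel
    print(smtp.noop())              # e.g. (250, b'2.0.0 Ok')
```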

Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure. During this handshaking, the client and server agree on various parameters used to establish the connection’s security (a minimal client-side sketch follows the list):

-      The handshake begins when a client connects to a TLS-enabled server requesting a secure connection and presents a list of supported cipher suites.

-      From this list, the server picks a cipher and hash function that it also supports and notifies the client of the decision.

-      The server usually then sends back its identification in the form of a digital certificate. The certificate contains the server name, the trusted certificate authority (CA) and the server’s public encryption key.

-      The client confirms the validity of the certificate before proceeding.

-      To generate the session keys used for the secure connection, the client either:

-      Encrypts a random number with the server’s public key and sends the result to the server (which only the server should be able to decrypt with its private key); both parties then use the random number to generate a unique session key for subsequent encryption and decryption of data during the session

-      Uses Diffie-Hellman key exchange to securely generate a random and unique session key for encryption and decryption that has the additional property of forward secrecy: if the server’s private key is disclosed in future, it cannot be used to decrypt the current session, even if the session is intercepted and recorded by a third party.

This concludes the handshaking and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of the above steps fails, the TLS handshake fails, and the connection is not created.
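
For a concrete view of the client side of this exchange, here is a minimal sketch using Python's standard ssl module; the host name is only an example, and the negotiated version and cipher will vary by server.

```python
import socket
import ssl

# Minimal TLS client: the default context carries the trusted CA set,
# and wrap_socket() runs the handshake described above.
context = ssl.create_default_context()  # verifies the server certificate

with socket.create_connection(("www.wikipedia.org", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.wikipedia.org") as tls:
        print(tls.version())   # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())    # negotiated cipher suite
        print(tls.getpeercert()["subject"])  # identity from the certificate
```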

TLS and SSL are defined as ‘operating over some reliable transport layer’, which places them as application layer protocols in the TCP/IP reference model and as presentation layer protocols in the OSI model. The protocols use a handshake with an asymmetric cipher to establish cipher settings and a shared key for a session; the rest of the communication is encrypted using a symmetric cipher and the session key.

From: https://en.wikipedia.org/wiki/Transport_Layer_Security


Public key infrastructure

A public key infrastructure (PKI) is a set of roles, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public-key encryption. The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred.

In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like persons and organizations). The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision.

The PKI role that assures valid and correct registration is called registration authority (RA). An RA is responsible for accepting requests for digital certificates and authenticating the entity making the request. In a Microsoft PKI, a registration authority is usually called a subordinate CA.

An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-party validation authority (VA) can provide this entity information on behalf of the CA.

Design

Public key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.

A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates which map public keys to entities, securely stores these certificates in a central repository and revokes them if needed.
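
As a sketch of that binding, the following uses the third-party `cryptography` package (an assumption; any X.509 toolkit would do) to create a toy CA and issue one certificate that maps a name to a public key. All names and validity periods are invented.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_name(common_name):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])

# Toy CA key and self-signed CA certificate (names are hypothetical).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.now(datetime.timezone.utc)

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("Example Root CA"))
    .issuer_name(make_name("Example Root CA"))       # self-signed
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# End-entity certificate: binds the subject's public key to its name.
subject_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("server.example.org"))
    .issuer_name(ca_cert.subject)                    # issued by the CA
    .public_key(subject_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .sign(ca_key, hashes.SHA256())                   # signed with the CA key
)

print(leaf_cert.subject, leaf_cert.issuer)
```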

A PKI consists of:

-      A certificate authority (CA) that stores, issues and signs the digital certificates

-      A registration authority which verifies the identity of entities requesting their digital certificates to be stored at the CA

-      A central directory – i.e., a secure location in which to store and index keys

-      A certificate management system managing things like the access to stored certificates or the delivery of the certificates to be issued.

-      A certificate policy

From: https://en.wikipedia.org/wiki/Public_key_infrastructure


Certificate Revocation List (CRL)

A Certificate Revocation List (CRL) is a list of digital certificates that have been revoked by the issuing Certificate Authority (CA) before their scheduled expiration date and should no longer be trusted. CRLs are a type of blacklist and are used by various endpoints, including Web browsers, to verify whether a certificate is valid and trustworthy. Digital certificates are used in the encryption process to secure communications, most often by using the TLS/SSL protocol. The certificate, which is signed by the issuing Certificate Authority, also provides proof of the identity of the certificate owner.

When a Web browser makes a connection to a site using TLS, the Web server’s digital certificate is checked for anomalies or problems; part of this process involves checking that the certificate is not listed in a Certificate Revocation List. These checks are crucial steps in any certificate-based transaction because they allow a user to verify the identity of the owner of the site and discover whether the Certificate Authority still considers the digital certificate trustworthy.

The X.509 standard defines the format and semantics of a CRL for a public key infrastructure. Each entry in a Certificate Revocation List includes the serial number of the revoked certificate and the revocation date. The CRL file is signed by the Certificate Authority to prevent tampering. Optional information includes a time limit if the revocation applies for only a period of time and a reason for the revocation. CRLs contain certificates that have either been irreversibly revoked or that have been marked as temporarily invalid.
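
Continuing with the third-party `cryptography` package used in the PKI sketch above, the following builds a CRL containing one revoked serial number and then performs the lookup an endpoint would; all values are invented.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

# Each CRL entry carries the revoked certificate's serial number and
# the revocation date, as described above.
revoked = (
    x509.RevokedCertificateBuilder()
    .serial_number(0xDEADBEEF)          # hypothetical revoked serial
    .revocation_date(now)
    .build()
)

crl = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(issuer)
    .last_update(now)
    .next_update(now + datetime.timedelta(days=7))   # freshness window
    .add_revoked_certificate(revoked)
    .sign(ca_key, hashes.SHA256())      # CA signature prevents tampering
)

# The check an endpoint performs: is this serial on the list?
print(crl.get_revoked_certificate_by_serial_number(0xDEADBEEF) is not None)  # True
print(crl.get_revoked_certificate_by_serial_number(0x1234) is not None)      # False
```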


Certificate authority

In cryptography, a certificate authority or certification authority (CA) is an entity that issues digital certificates. A digital certificate certifies the ownership of a public key by the named subject of the certificate. This allows others (relying parties) to rely upon signatures or on assertions made about the private key that corresponds to the certified public key. In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate. Many public-key infrastructure (PKI) schemes feature CAs.

Overview

Trusted certificates can be used to create secure connections to a server via the Internet. A certificate is essential in order to defeat a malicious party that happens to be on the route to a target server and acts as if it were the target. Such a scenario is commonly referred to as a man-in-the-middle attack. The client uses the CA certificate to authenticate the CA signature on the server certificate, as part of the authorizations before launching a secure connection. Usually, client software – for example, browsers – includes a set of trusted CA certificates. This makes sense, as many users need to trust their client software. A malicious or compromised client can skip any security check and still fool its users into believing otherwise.

The customers of a CA are server administrators who need a certificate that their servers will present to users. Commercial CAs charge to issue certificates, and their customers expect the CA’s certificate to be included in the majority of web browsers, so that secure connections to the certified servers work out of the box. The number of internet browsers, other devices, and applications that trust a particular certificate authority is referred to as ubiquity. Mozilla, which is a non-profit organization, ships several commercial CA certificates with its products. While Mozilla developed its own policy, the CA/Browser Forum developed similar guidelines for CA trust. A single CA certificate may be shared among multiple CAs or their resellers. A root CA certificate may be the base used to issue multiple intermediate CA certificates with varying validation requirements.

In addition to commercial CAs, some non-profits issue digital certificates to the public without charge; a notable example is CAcert.

Large organizations or government bodies may have their own PKIs (public key infrastructure), each containing their own CAs. Any site using self-signed certificates acts as its own CA.

Browsers and other clients typically allow users to add or remove CA certificates at will. While server certificates regularly last for a relatively short period, CA certificates are valid for far longer, so, for repeatedly visited servers, it is less error-prone to import and trust the issuing CA than to confirm a security exemption each time the server’s certificate is renewed.

Less often, trusted certificates are used for encrypting or signing messages. CAs issue end-user certificates too, which can be used with S/MIME. However, encryption requires the receiver’s public key and, since authors and receivers of encrypted messages apparently know one another, the usefulness of a trusted third party remains confined to the signature verification of messages sent to public mailing lists.

From: https://en.wikipedia.org/wiki/Certificate_authority


Communication Processors (Networking)

The widespread perception of communication processors (CPs) is that they are general-purpose devices from a family of equipment types, including feeder multiplexers, packet assembler/disassemblers (PADs), terminal servers, and protocol converters. In the typical IBM mainframe environment, however, system administrators usually take a much narrower view, limiting the definition to front-end processors (FEPs), establishment controllers, and network gateway controllers.

Even though most mainframe central processors include parallel circuits and clocking mechanisms to handle I/O in a quasi-parallel fashion, they still consume a significant percentage of available CPU cycles for specialized, high-overhead, front-end applications. This has motivated system designers to find ways to offload these support processes onto less expensive resources. This motivation gave rise to the introduction of communications processor technology.

These special-purpose computers are designed to manage the complexities of computer communications such as protocol processing, data format conversion, data buffering and routing, error checking and correction, and network control. Depending on the environment, vendor, or configuration, communications processors are commonly referred to as front-end processors (FEPs), local/remote concentrators or hubs, communication servers, gateway switches, intelligent routers, and controllers.

The market for communications processors is fully mature and dominated by one product, the IBM 3745 Communication Controller. Lately the term communications processor has been used increasingly to refer to network add-in boards for both PC and VMEbus systems. The concept is the same - that these boards offload communications functions from the CPU.

Functions

The traffic management responsibilities of CPs include establishing, maintaining, and controlling any communications sessions between a host computer and data terminal equipment (DTE), switching devices, other hosts (peer-to-peer), and intranet and Internet activities. Several vendors have enhanced their CP offerings by adding features that include utilities for response time monitoring, event logging, terminal status indication, system administration, and diagnostic testing.

The original communication processors were relegated to mainframe systems. However, the advent of client/server computing and network topologies brought with them new, modernized devices for communications connectivity. Instead of a processor dedicated to servicing a single, central host, these newer communications servers offer a front end for any number of processors connected via a local or enterprise-wide network.

Communications servers allow any piece of data communication equipment supporting the EIA-232 standard (terminals, modems, printers, hosts, and personal computers) to attach to a network. In some configurations, several such servers operate concurrently in the same network, thereby providing an extensive range of communications services.

Communications processors, on the other hand, are generally classed by their range of features, flexibility, and capabilities. These include the number and types of host interfaces and protocols supported, aggregate bandwidth, and the types of terminal equipment and other devices supported. Creating an optimum configuration generally entails customizing the CP application software according to the functional and operational requirements of the system. This process, called System Generation (SYSGEN), prompts users through a series of questions about the desired configuration and estimates of data traffic. The result of this activity is a customized, resident program and associated tables that are automatically loaded and run each time the processor is initialized.

Communications processors support a number of protocols, including OSI, TCP/IP, IBM’s SNA/SDLC and 3270/3780 BSC, Ethernet, Fast Ethernet, token-ring, FDDI, ATM, X.25, frame relay, SONET, and Digital Equipment’s DECnet and Local Area Transport (LAT) protocols. A number of CPs are able to operate in a multi-stack environment, supporting and translating multiple protocols concurrently.

From: http://what-when-how.com/networking/communications-processors-networking/


Applications Processors – The Heart of the Smartphone

The term APU arose to identify the chips in smartphones that could run an operating system and application software. These capabilities made them more comparable to computers than early cell phones. While earlier cell phones also contained a processor, it was a baseband processor tasked only with handling the digital processing related to the actual communication over the cellphone radios.

State-of-the-art APUs in today’s smartphones are the pinnacle of the system-on-chip (“SoC”) movement that began in the nineties. Integration has always been central to the microelectronics business. Combining the various functionality offered in multiple discrete integrated circuits became a natural evolution that would reduce component count and circuit board complexity, leading to a system cost reduction – good for consumers and handset maker profit margins alike.

Nowadays, it is standard for your iPhone to be powered by an Apple A6 APU integrating multiple ARM cores along with several graphics processing units (GPU), large on-chip cache memories, memory controllers for communicating with off-chip DRAM, audio and video decoders (and encoders), USB host controllers, and other analog circuit functions.

It’s not just Apple. Every smartphone is powered by one of a number of ARM-based (or similar but proprietary) chips offering quad-core processors and multi-core GPUs.

For anyone wishing to explore further, the highest-performance chips are the Nvidia Tegra (actually five cores!), the Qualcomm Snapdragon S4, and the Exynos line from Samsung. Smartphones in the burgeoning low-cost category are most often powered by a MediaTek APU.

For anyone following more closely, you will notice that I left out the Texas Instruments OMAP (open multi-media application platform) line. That would be a serious omission since TI invented both the terminology and the product category that enticed a group of fast followers to jump into the race.

TI led the charge toward the SoC as the next step in microelectronic integration. Along the way, they were challenged by Intel who led an opposing camp suggesting that the integration of diverse functionality was better handled at the package level. This became known as system-in-package (or SiP). Intel’s premise was that chip-level manufacturing was better suited to focusing on the needs of different functions – memory versus digital processing versus analog circuits – that could be brought together by stacking individual integrated circuit dies during the package assembly phase.

Oddly enough, today’s smartphone APUs combine the best of both SoC and SiP technology. The main processor die includes a wide array of functions, from CPU and GPU cores to static RAM and analog interface blocks.

Tablet computers often employ a different approach, but smartphone APUs are most often packaged in a special configuration that allows a dynamic RAM (DRAM) package to be mounted on top. The technique is appropriately known as package-on-package (PoP). It provides a complete computing platform SiP that requires only a separate NAND flash storage to operate.


Hypertext Transfer Protocol

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web.

Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC 2616 in 1999.

Its successor, HTTP/2, was standardized in 2015; it is now supported by major web browsers and already supported by major web servers.

Technical overview

HTTP functions as a request-response protocol in the client-server computing model. A web browser, for example, may be the client and an application running on a computer hosting a web site may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them when possible to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.

HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite. Its definition presumes an underlying and reliable transport layer protocol, and Transmission Control Protocol (TCP) is commonly used.

However, HTTP can be adapted to use unreliable protocols such as User Datagram Protocol (UDP), for example in HTTPU and Simple Service Discovery Protocol (SSDP).

HTTP resources are identified and located on the network by uniform resource locators (URLs), using the uniform resource identifier (URI) schemes http and https. URIs and hyperlinks in Hypertext Markup Language (HTML) documents form inter-linked hypertext documents.

HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, scripts, stylesheets, etc. after the page has been delivered. HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead.
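
To make the request-response exchange visible on the wire, the sketch below sends two HTTP/1.1 requests over a single persistent connection using Python's standard socket module; example.com is a stand-in host, and the single recv() per response is a simplification.

```python
import socket

# Two requests over one persistent HTTP/1.1 connection. HEAD is used so
# responses carry no body, which keeps a single recv() per response
# reasonable; a robust client would read until the blank line that ends
# the header block.
request = (
    "HEAD / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
).encode("ascii")

with socket.create_connection(("example.com", 80)) as sock:
    for _ in range(2):  # same TCP connection, two request-response cycles
        sock.sendall(request)
        reply = sock.recv(4096).decode("ascii", errors="replace")
        print(reply.splitlines()[0])  # status line, e.g. 'HTTP/1.1 200 OK'
```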

Request methods

HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification defined the GET, POST and HEAD methods and the HTTP/1.1 specification added 5 new methods: OPTIONS, PUT, DELETE, TRACE and CONNECT. By being specified in these documents, their semantics are well known and can be depended on. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined and this allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined 7 new methods and RFC 5789 specified the PATCH method. (A short sketch exercising several of these methods follows the list below.)

-      GET: The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) The W3C has published guidance principles on this distinction, saying, “Web application design should be informed by the above principles, but also by the relevant limitations”

-      HEAD: The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

-      POST: The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

-      PUT: The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.
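
As referenced above, here is a short sketch exercising GET, HEAD, and POST with Python's standard http.client module; example.com is a stand-in, and the /submit path is hypothetical.

```python
import http.client

# GET, HEAD, and POST over one reused connection. example.com is a
# stand-in, and the /submit path is hypothetical; a real server defines
# which POST targets it accepts.
conn = http.client.HTTPSConnection("example.com")

conn.request("GET", "/")
resp = conn.getresponse()
print("GET:", resp.status, len(resp.read()), "bytes")

conn.request("HEAD", "/")
resp = conn.getresponse()
print("HEAD:", resp.status, len(resp.read()), "bytes")  # 0 bytes: headers only

conn.request("POST", "/submit", body="key=value",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
print("POST:", resp.status)  # likely 404/405 here; shown for the method shape

conn.close()
```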

From: https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol

