Packet switching
Packet switching is a technique for telecommunications networks in which information is broken into individual packets, or "envelopes," for transmission and sent over shared facilities, rather than committing dedicated resources to each communication between a source and destination.
Datagram routing
In what was historically called the datagram paradigm, but now often simply routing, every packet carries complete source and destination information, so the routers that forward it do not need to "maintain state" about previous exchanges between the source and destination. The term flow identifies a source-destination association, perhaps qualified with additional information such as priority, that does not actually commit resources between the source and destination.
This is the basic architecture used in the Internet.
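A minimal sketch, in Python, of how stateless datagram forwarding works: each packet carries its full source and destination addresses, and a router chooses an outgoing interface by longest-prefix match on the destination alone, keeping no record of earlier packets. The prefixes, addresses, and interface names are invented for illustration.

```python
from ipaddress import ip_address, ip_network

# Illustrative forwarding table: destination prefix -> outgoing interface.
# Prefixes and interface names are hypothetical.
FORWARDING_TABLE = {
    ip_network("203.0.113.0/24"): "eth0",
    ip_network("198.51.100.0/24"): "eth1",
    ip_network("0.0.0.0/0"): "eth2",   # default route
}

def forward(packet: dict) -> str:
    """Pick an interface by longest-prefix match on the destination address.

    The router keeps no per-flow state: every decision uses only the
    addresses carried in the packet itself.
    """
    dst = ip_address(packet["dst"])
    matches = [net for net in FORWARDING_TABLE if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

# Every packet is self-contained; two packets of the same flow are
# handled independently.
pkt = {"src": "192.0.2.7", "dst": "203.0.113.42", "payload": b"hello"}
print(forward(pkt))   # -> eth0
```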
Virtual circuit
A variant of packet switching is the virtual circuit method, in which domain-local addresses, such as telephone numbers in the Public Switched Telephone Network, are used to set up temporary associations between source and destination, usually represented by short identifiers that create short-lived state information in the switching devices. Like datagram routing, virtual circuit technology still shares common communications media. X.25 networks were early packet-switched virtual circuit services.
In contrast, circuit switching commits resources, not just state, to the association.
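A minimal sketch, assuming a simplified switch model, of the state a virtual circuit creates: at setup time each switch records a mapping from (incoming port, incoming circuit identifier) to (outgoing port, outgoing circuit identifier), and data packets then carry only the short identifier rather than full addresses. The ports and identifiers below are hypothetical.

```python
class VirtualCircuitSwitch:
    """Simplified switch holding short-lived per-circuit state."""

    def __init__(self):
        # (in_port, in_vc_id) -> (out_port, out_vc_id)
        self.circuit_table = {}

    def setup(self, in_port, in_vc, out_port, out_vc):
        """Signaling phase: install state for one virtual circuit."""
        self.circuit_table[(in_port, in_vc)] = (out_port, out_vc)

    def teardown(self, in_port, in_vc):
        """Release the short-lived state when the association ends."""
        self.circuit_table.pop((in_port, in_vc), None)

    def forward(self, in_port, in_vc, payload):
        """Data phase: packets carry only the short identifier."""
        out_port, out_vc = self.circuit_table[(in_port, in_vc)]
        return out_port, out_vc, payload

# Hypothetical example: circuit 17 arriving on port 1 leaves port 3 as circuit 42.
sw = VirtualCircuitSwitch()
sw.setup(in_port=1, in_vc=17, out_port=3, out_vc=42)
print(sw.forward(1, 17, b"data"))   # -> (3, 42, b'data')
```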
Hybrid methods
There are newer methods that combine aspects of packet and circuit switching, or at least of packet and virtual circuit switching, such as Multiprotocol Label Switching or paths set up with the Resource Reservation Protocol. Such methods still share media, perhaps with an automatic failover mechanism to change media in the event of failure, but endeavor to keep traffic for a particular source-destination pair on the same shared path for the duration of the association.
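A minimal sketch of label swapping of the kind MPLS uses, with labels, ports, and topology invented for illustration: once a label-switched path has been established, each hop forwards by swapping a short label rather than consulting destination addresses, so every packet of the association follows the same pre-set path.

```python
# Hypothetical label-swap table for one hop on a label-switched path:
# incoming label -> (outgoing port, outgoing label).
LABEL_TABLE = {
    100: ("port2", 205),
    101: ("port3", 310),
}

def label_switch(packet: dict):
    """Swap the label and pick the outgoing port.

    The destination address is never examined, so all packets carrying
    the same label follow the same pre-established path.
    """
    out_port, out_label = LABEL_TABLE[packet["label"]]
    packet["label"] = out_label
    return out_port, packet

print(label_switch({"label": 100, "payload": b"data"}))
```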
By keeping traffic on the same path, the information experiences a constant latency imposed by speed-of-light and processing delays in the path. While these methods do not actually reserve resources, they are usually part of an admission control scheme that prevents assigning more traffic to a shared path than it can handle without significant queueing delays, which arise when packets cannot enter the path until other packets have been sent. Queueing delay, in this context, refers to contention among packets of different flows; it is perfectly appropriate for systems to maintain packet ordering within the same flow.
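A minimal sketch of the kind of admission control described above, assuming each flow declares a nominal rate and the shared path has a known capacity (all names and numbers are illustrative): a new flow is admitted only if the sum of already-admitted rates plus its own stays within the path's capacity, which keeps queueing between different flows bounded.

```python
class AdmissionController:
    """Admit a flow onto a shared path only if capacity remains."""

    def __init__(self, path_capacity_mbps: float):
        self.capacity = path_capacity_mbps
        self.admitted = {}   # flow id -> declared rate in Mbit/s

    def request(self, flow_id: str, rate_mbps: float) -> bool:
        committed = sum(self.admitted.values())
        if committed + rate_mbps <= self.capacity:
            self.admitted[flow_id] = rate_mbps
            return True    # flow may use the shared path
        return False       # refusal keeps queueing delays bounded

    def release(self, flow_id: str):
        self.admitted.pop(flow_id, None)

# Illustrative use: a path with 1000 Mbit/s of usable capacity.
ac = AdmissionController(path_capacity_mbps=1000)
print(ac.request("flow-A", 600))   # True
print(ac.request("flow-B", 600))   # False: would exceed capacity
```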