Information security is the set of protective measures applied to information as it traverses a telecommunications or computer network, and continues into the computers themselves. There is a wide range of such measures, and not all are needed in every situation. There is no longer any sharp distinction between communications security and information security. In a simpler world, once a user gained access to a computer, all resources on that computer became available. As information threats grew, user rights were restricted on individual computers; a casual user of a public library could no longer install a new operating system. Now that many applications may, invisibly to the user, execute across multiple computers, the distinction has become minimally useful.
Still, it is reasonable to talk about the needs of the entire system. Governments may invest billions in communications intelligence organizations dedicated to breaking the strongest military and diplomatic communications of other governments. Each individual and organization has to address the question of whether a miscreant, whether an individual or a government, is likely to try to access one's own information and communications, and how much effort and expense the miscreant will expend.
If one is a celebrity, the risks are greater. In the cited example of hospital employees looking at an entertainer's records, however, the unauthorized access came from users authorized to use the computer system, who had no justification to view those particular records. Restricting health care workers' access to a strict subset of records could interfere with legitimate access needed in an emergency. There are no simple answers.
Many years ago, Dennis Branstad, then with the U.S. National Institute of Standards and Technology, coined the "5-S" mnemonic describing the attributes of a secure communication. There are additional threats today, but it remains an excellent starting point for deciding whether a given application needs all of these properties, or whether some can be omitted. For example, it may be important that a stock market transaction be protected against modification, but, since it will soon be announced publicly, secrecy is not terribly important.
- Sealed: cannot be modified without detection
- Sequenced: protected against loss, replaying, or reordering of messages
- Secret: protected against unauthorized disclosure
- Signed: confirmed as coming from the sender
- Stamped: the sender cannot deny sending and the receiver cannot deny receiving
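Two of these properties, "Sealed" and "Signed", can be sketched with a keyed message authentication code. This is an illustrative example only, assuming a hypothetical pre-shared key between the two parties; it does not by itself provide the "Secret", "Sequenced", or "Stamped" properties.

```python
import hashlib
import hmac

# Assumption for illustration: sender and receiver share this secret key.
SHARED_KEY = b"example shared secret"

def seal(message: bytes) -> bytes:
    """Return a tag that detects modification (Sealed) and, because only
    the key holders can compute it, confirms the sender (Signed)."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(seal(message), tag)

msg = b"SELL 100 shares of XYZ"
tag = seal(msg)
assert verify(msg, tag)                             # intact message accepted
assert not verify(b"SELL 900 shares of XYZ", tag)   # tampering detected
```

Note that in the stock-transaction example above, this scheme protects against modification without attempting secrecy: the message itself travels in the clear.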
The Security Process
One eternal truth about security is that it does not exist unless every security-relevant action can be audited. A reliable (often replicated) tamper-proof log is essential.
It is appropriate to search for vulnerabilities so they can be corrected, but techniques that have perfectly valid operational uses can also be preparation for attacks. See network reconnaissance.
As an example, CZ gives limited privileges to a user who has not established an account. In the CZ case, to gain additional privileges, one minimally gives one's name and has it verified; that conveys author privilege. Additional privilege is needed to be an editor or constable, and there appear to be further system administrator and software modification privileges.
Articles, and the text within them, have varying levels of object sensitivity. Anyone can read them. Any author can add to or edit an unlocked version. Deletion of articles, however, requires constable privileges.
Note that privileges are described here as an administrative assignment, as are the restrictions on operations that can be performed on articles. These administrative controls precede interaction with the system.
To gain access beyond general reader, one begins by entering a user name known to CZ. Identification, in this security context, is the process of claiming an identity. Once that claim is made, it is subject to user authentication that confirms the user's identity.
While user authentication can be complex, any reasonably secure scheme uses at least two "factors": a purported identity, and a factor by which the system verifies the identity. In the CZ case, the second factor is "something you know", such as a password. In other cases, the second factor might be "something you are", such as a person with a biometrically verified fingerprint, or "something you have", such as a physical key or electronic security token.
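A minimal sketch of the "something you know" factor follows. The function names and parameters are illustrative, not taken from any real system; the key point is that the system stores only a salted hash of the password, never the password itself, and compares a fresh hash of the claimed password against it.

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a slow salted hash, not the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash for the claimed password and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)

salt, stored = enroll("correct horse battery staple")
assert authenticate("correct horse battery staple", salt, stored)
assert not authenticate("wrong guess", salt, stored)
```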
After the user is authenticated, the system grants credentials. In an information system, a credential is a right to use a privilege. This usage of the term probably originated in the Kerberos authentication system. 
A user might have a privilege that cannot be exercised due to additional rules of credentialing. For example, a given user might need to be at a workstation within a physically secured perimeter to use some privilege; the credential to use that privilege will not be granted to the same user accessing the protected system from the public Internet.
The wise user, however, may want mutual authentication. Some excellent online banking systems not only require two-factor authentication from the user, but also authenticate themselves to the user. Phishing is a security attack in which the miscreant convinces the user that a false server is the real one. A common phishing technique is to send an HTML-formatted electronic mail message that appears to contain a link to the real server. Unless the underlying HTML is examined, there is no easy way to distinguish a well-forged message, linking to the miscreant's server, from a genuine one.
If, however, server identification is in effect, the server must present a second factor to the user, such as a password the user assigned to the bank account through the bank's customer service department. More sophisticated techniques present digital certificates, which, minimally, use cryptography as a means of authentication. By adding server authentication of any sort, the overall authentication process has at least three factors. Encrypting some or all of the factors makes the authentication even stronger.
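The idea of the server proving itself to the client can be sketched as a challenge-response exchange over a shared secret. This is purely illustrative: real online banking relies on certificates and TLS rather than a hand-rolled shared key, and the key and function names here are assumptions.

```python
import hashlib
import hmac
import os

# Assumption: the bank and the customer established this secret out of band.
SHARED = b"secret known to both bank and customer"

def respond(challenge: bytes, key: bytes = SHARED) -> bytes:
    """Answer a challenge; only a holder of the key can answer correctly."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# The client challenges the server with a fresh random value.
challenge = os.urandom(16)
server_answer = respond(challenge)                  # genuine server
phisher_answer = respond(challenge, b"wrong key")   # impostor lacks the secret

assert hmac.compare_digest(server_answer, respond(challenge))
assert not hmac.compare_digest(phisher_answer, server_answer)
```

Because the challenge is random each time, a phisher who recorded an earlier exchange cannot simply replay the old answer.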
Potentially, an unauthorized sender might inject secured packets into the information flow. If the attacker has analyzed the protection given to the content, but not the method by which packets are verified as coming from the authorized source, such a violation should still be detectable.
Many methods can be used for sender authentication, not all of them cryptographic in nature. Assuming the parties to the communication have synchronized time-of-day references, a false sender could be recognized by administrative means as simple as flagging a transmission made outside the working hours for that function. Since the working hours never appear in the secured flow, a pure interceptor of traffic could not know that this test would be applied; only by penetrating the computers that verify the times of service could the attacker avoid detection.
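The working-hours check described above can be sketched in a few lines. The specific hours are an assumed policy for illustration; the essential point is that this policy lives outside the secured traffic, so an interceptor cannot learn it.

```python
from datetime import datetime, time

# Assumed administrative policy, never transmitted in the secured flow.
WORK_START, WORK_END = time(8, 0), time(18, 0)

def plausible_sender(timestamp: datetime) -> bool:
    """Flag transmissions made outside the function's working hours."""
    return WORK_START <= timestamp.time() <= WORK_END

assert plausible_sender(datetime(2024, 3, 4, 10, 30))     # mid-morning: accepted
assert not plausible_sender(datetime(2024, 3, 4, 2, 15))  # 2 a.m.: flagged
```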
Integrity services protect against unauthorized alteration of information. There are many ways that information can be changed. Consider several kinds of financial fraud: changing the amount on a funds transfer, preventing debits to an account, and replaying credits to an account. Protection against each is different.
When a record or message has atomic integrity, the receiver can verify that it has not been altered. A cryptographic hash is computed before transmission and, if content confidentiality is enabled, further encrypted. On reception, the receiver confirms atomic integrity by repeating the same cryptographic hash process and checking that it produces the received hash value.
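The atomic integrity check can be sketched as follows. Note that a bare hash detects only accidental or naive alteration; as the text says, in practice the hash itself must be protected (for example, encrypted along with the content), since otherwise an attacker could recompute it.

```python
import hashlib

def make_hash(record: bytes) -> bytes:
    """Compute the cryptographic hash sent alongside the record."""
    return hashlib.sha256(record).digest()

record = b"TRANSFER $500 FROM A TO B"
sent_hash = make_hash(record)            # computed before transmission

# On reception, repeat the same process and compare.
received_record = record                 # arrived unchanged
assert make_hash(received_record) == sent_hash

tampered = b"TRANSFER $900 FROM A TO B"
assert make_hash(tampered) != sent_hash  # alteration detected
```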
There may be additional levels of security, such as plausibility checks based on trusted timestamps, digital signatures, etc.
The basic mechanism of sequential integrity assurance is to put encrypted sequence numbers into each message and, on reception, verify that the sequence is correct, with exactly as many records as there are sequence numbers. Other measures exist, such as verifying that a cryptographic hash of the entire sequence, sent securely, can be recomputed exactly over the decrypted text. Independent control channels can also carry the sequence information.
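A sketch of the receiver's side of this check follows. In a real system the sequence numbers would travel encrypted; here they are shown in the clear for readability, and the record contents are placeholders.

```python
def check_sequence(received: list[tuple[int, bytes]], expected_count: int) -> bool:
    """Verify the stream is complete and in order: exactly expected_count
    records, numbered 1..expected_count with no gaps or duplicates."""
    numbers = [seq for seq, _ in received]
    return (len(numbers) == expected_count
            and numbers == list(range(1, expected_count + 1)))

msgs = [(1, b"debit"), (2, b"credit"), (3, b"debit")]
assert check_sequence(msgs, 3)                        # intact stream
assert not check_sequence([msgs[0], msgs[2]], 3)      # loss detected
assert not check_sequence(msgs + [(2, b"credit")], 4) # replay detected
```

This single test catches all three failures the "Sequenced" property names: loss (too few records), replay (a duplicate number), and reordering (numbers out of sequence).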
When content confidentiality is in effect, unauthorized users cannot obtain the cleartext content of the message. Cryptographic means are the primary way to achieve this service, although pseudo-cryptographic methods such as frequency agility or spread-spectrum radio can make it difficult to capture material even when transmitted in the clear. Other means of achieving content confidentiality include physically protected distribution systems and concealing the data using steganography.
Nonrepudiation deals with proof that a message was sent and that it was delivered. There have long been parallels in postal mail (e.g., proof of mailing and delivery receipt), as well as the service of legal documents.
In commercial applications, nonrepudiation mechanisms give a chance to avoid classic comments such as "the check is in the mail" or "we haven't received your payment."
There are technical nuances to nonrepudiation in communications systems. Authentication of the sender and recipient will be needed in any proof of their actions, but the proof must also, at the least, identify the message whose sending and delivery are in question.
In many cases, a digitally signed cryptographic hash of the record (or, for a multi-record transfer, of the set of records) will serve as a content identifier. It may also be necessary for that message, or perhaps a header in an "envelope", to bear a trusted timestamp, digitally signed by a source of date and time information.
- A user of an information communication system (such as e-mail) cannot deny having sent a message to another user. If sender nonrepudiation is in place at a given company, an employee can deny sending his boss an email saying "I quit" all he wants, but, because of specific technology implemented on the mail server and/or mail client, it is provable that he sent the message.
Many attacks try to get the computer to do something for the evildoer: perhaps give him or her data that he or she is not authorized to have (anyone got a use for a few hundred thousand credit card numbers? medical records?), or let him or her make bank withdrawals from other people's accounts, or launch some nuclear missiles, or whatever.
A denial of service attack (DoS) does not do that; it just tries to deny normal computer services to the authorized users. Many DoS attacks are flooding attacks; if I send you 10,000 emails, your normal email will likely not get through. But not all involve flooding; maybe I can construct a really evil mail message, deliberately breaking the rules for mail formats in such a way that your mail server or your mail-reading software will crash when it tries to process the beast. Either way, I'm denying you mail service.
In general, this is easier than other attacks, like trying to read your mail or produce forged mail that appears to be from you. Unfortunately, those aren't necessarily hard either, but that's another topic.
It is fairly common for attackers to take over a few tens of thousands of insecure machines. The "owned" machines are "zombies" and the network of them is a botnet. This is now a business; spammers rent time on botnets to send their rubbish. An outfit called "Russian Business Network" was the biggest vendor last I heard. The attackers search blocks of addresses used for broadband Internet, looking for vulnerable machines. Windows boxes are their favorite target; if you run Windows and do not apply Microsoft's updates, your machine is all but guaranteed to be taken over sooner or later.
Given a botnet, you can do a DDoS, a distributed denial of service attack. Have thousands of zombies all hammering away at some website you dislike. The server may crash, and even if it doesn't, normal web services will be disrupted.