This lecture considers secure communications in the Internet. It
reviews the need for security, and the principles of encryption,
integrity protection, and authentication of messages. It explains
the principles of operation of the Transport Layer Security Protocol
(TLS), version 1.3, and how it protects Internet traffic. And it
briefly reviews some of the issues around writing secure software.
The first part of this lecture discusses the need for security in
Internet communications. It reviews why end-to-end encryption and
message integrity protection are essential to protect Internet users
from eavesdropping, identity theft, fraud, and other attacks. And it
discusses some of the tensions and concerns that have been raised
about the provision of such protection.
Slides for part 1
In the last lecture, I discussed the behavior of TCP
and some issues around connection establishment.
One of these issues was the observation
that establishing a secure connection, using TLS,
was slower than establishing an insecure connection.
In this lecture, I want to talk more about TLS
and about security in general.
In this first part,
I'll talk about why security is important,
and why we need to secure communications.
Then, in part two,
I'll talk about the principle of secure communication
and the cryptographic techniques
that can be used to protect data.
Part three of the lecture will describe
some of the behavior of the transport layer security
protocol, that provides security for most Internet traffic.
And, finally, in part four,
I'll talk about some general issues around network security,
and how to write secure network applications.
So why do we need secure communications?
Well, the fundamental problem
is that it's possible to eavesdrop on network traffic.
This can be done by wiretapping the network links
down which the data flows,
or it can be done by configuring the network routers
to save a copy of the packets they forward.
The result is that traffic passing across the network
can be monitored by third parties.
If you want to ensure that the data you send
across the network is private,
then that data needs to be encrypted somehow.
Similarly, network routers can modify
the packets they forward.
This means that the router can change the data
being delivered without the consent of the sender.
The sender cannot stop this happening.
But they can add some message integrity protection,
such as a digital signature,
to allow the receiver to detect and reject
messages that have been tampered with.
Finally, there are numerous devices in the network,
known as middle boxes,
that try to improve communication
by somehow interpreting or modifying the data being sent.
For example, we spoke about network address
translation in the last lecture
where a NAT router rewrites the addresses and ports
in TCP/IP headers to allow several machines
to share a single IP address.
Other examples include network firewalls,
that monitor traffic and try and prevent bad traffic
from entering a network,
as well as the various accelerator devices
that try to improve the performance of TCP
connections running over satellite links.
If not carefully maintained,
these devices tend to lead to network ossification,
limiting the ability to change network protocols.
A final goal of secure communications
is therefore to limit the ability of such devices to inspect
and act on the traffic,
so helping to ensure that the network
can continue to evolve.
A lot of different organizations monitor the network,
for many different reasons.
These include governments, intelligence agencies,
and law enforcement agencies.
For example, the police have to monitor the network
as part of their crime prevention activities;
domestic intelligence agencies inspect traffic
to protect against terrorism, or to monitor foreign targets;
and foreign intelligence agencies might try to
spy on domestic targets.
That this happens shouldn't be a surprise.
And there are clearly good reasons for some of this monitoring.
Many people would agree, I think,
that targeted wiretaps on suspected criminals,
subject to appropriate oversight,
the need to obtain a warrant of some sort,
and when there's probable cause,
are probably not unreasonable.
Relatively few people would object
to actively monitoring the network traffic of those
actively suspected of being engaged in serious crimes,
terrorist activities, child abuse, and so on.
People differ on what crimes they consider serious,
or on the standards of probable cause,
or on the amount of oversight needed.
But all societies accept some degree of monitoring
and oversight of network traffic.
However, Edward Snowden showed that
some intelligence agencies, including,
but certainly not limited to the Five Eyes,
the UK, the US, Canada, Australia, and New Zealand,
were conducting pervasive monitoring of all network traffic.
Other governments are also known to conduct such monitoring.
The Great Firewall of China is a common example,
along with monitoring by Russia,
Iran, Saudi Arabia, and others.
Many felt that this indiscriminate monitoring
of all network traffic without probable cause or suspicion,
was a step too far.
In part, I think this came from distrust
of those governments, their motives,
and how they might use the data.
The people they were supposed to represent were unconvinced
that the monitoring was actually doing them good.
But, in part, there was also the realization
that if supposedly friendly governments
were monitoring traffic indiscriminately,
then so were others.
Even if I completely trust our government
to monitor Internet traffic only for good reasons,
the fact that they're able to monitor that traffic
means that others are able to do so too.
And those others might not have my best interests at heart.
This led to a push to enable pervasive encryption,
to encrypt more and more of the traffic
crossing the Internet.
The most visible manifestation of this
is that most websites now use HTTPS
and encrypt their traffic.
But the spread of encryption has been wider than the web.
The result is that most Internet traffic
is now encrypted by default,
hindering, but not preventing, pervasive monitoring.
Governments are not the only organizations
to monitor network traffic, of course.
We've all contacted a business and been told that our
call may be monitored for quality and training purposes.
Some of this monitoring by businesses is necessary
for regulatory compliance.
Banking and insurance industries, for example,
require records to be kept in most cases, to prevent fraud.
There are good reasons for some of this monitoring.
Other aspects of monitoring and tracking by
businesses are perhaps less beneficial.
Targeted advertising and customer profiling is
frequently cited as problematic, for example.
Communication security measures, such as encryption,
can help reduce such unwanted monitoring,
though the effect is small, since this type of
monitoring and tracking is often delivered
by the sites we intentionally visit,
rather than by snooping on communications.
We also see network operators
monitoring traffic on the networks they operate.
Again, there are both beneficial,
and problematic, reasons for this.
Network operators monitor traffic
to understand how well their networks are operating,
and whether they're meeting their quality of service goals.
It's common, for example,
for network operators to inspect
the sequence and acknowledgement numbers
in the headers of TCP packets traversing their networks.
This lets them understand if packets are being lost,
or if the time taken for packets to traverse
the network is building up,
both of which are signs that the network
is becoming overloaded.
This helps the operators decide when to reroute traffic
onto less busy paths, or when to install
more network capacity to keep good performance.
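As a concrete illustration of this sort of passive measurement, here's a minimal Python sketch of pulling the ports, sequence number, and acknowledgement number out of a raw TCP header. The field layout follows the standard 20-byte TCP header; the sample bytes themselves are fabricated for the example.

```python
# Hypothetical sketch of the passive measurement described above:
# extracting sequence and acknowledgement numbers from a raw TCP header.
import struct

def parse_tcp_seq_ack(header: bytes):
    """Return (src_port, dst_port, seq, ack) from a raw TCP header."""
    # '!' = network (big-endian) byte order; H = 16-bit, I = 32-bit fields.
    src_port, dst_port, seq, ack = struct.unpack("!HHII", header[:12])
    return src_port, dst_port, seq, ack

# A fabricated 20-byte header: source port 443, destination port 51000,
# sequence number 1000, acknowledgement number 2000.
sample = struct.pack("!HHII", 443, 51000, 1000, 2000) + bytes(8)
print(parse_tcp_seq_ack(sample))  # (443, 51000, 1000, 2000)
```

An operator watching how the sequence and acknowledgement numbers advance over time can infer loss and queueing delay without seeing any payload data.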
And few would argue that this sort of
monitoring is a problem.
On the other hand, operators can monitor traffic
to profile what sites their customers are visiting.
This information could then be sold to advertisers,
or could be used to negatively influence
the performance of the traffic.
For example, an operator might choose to lower the
priority of Netflix traffic
for customers who haven't signed up
to their video streaming package.
Many people are less comfortable with such behaviors,
and communication security measures can limit them.
Finally, of course, there are criminals and malicious users
that try to steal data and user credentials,
that try to perform identity theft,
or conduct other attacks.
Communication security clearly cannot prevent
all such attacks, but it can limit their scope
by limiting the amount of information that's available
and visible to those monitoring the networks.
As a result of these various attacks,
there are a range of measures that can be deployed
to help protect privacy by encrypting network traffic.
Unfortunately, what makes this problem space challenging,
is that the mechanisms used to protect
against malicious attacks also prevent benign monitoring.
There's no known way to stop criminals
and malicious attackers from accessing private data
that doesn't also stop legitimate law enforcement
from doing so, for example.
In addition to monitoring and observing data
as it traverses the network,
many organizations might also try to modify messages.
Governments and law enforcement, for example,
might require ISPs to censor,
or modify, DNS responses
to restrict access to certain sites.
They might require DNS responses to be modified
to indicate that certain sites don't exist,
or to change the addresses in the DNS response
to direct users to a page indicating that the
content is blocked.
Alternatively, governments might require ISPs
and network operators to block or rewrite traffic
containing certain content.
As with government traffic monitoring,
there can be reasonable, and unreasonable,
reasons for governments to modify messages.
Many countries have widely accepted laws
about restricting hate speech,
blocking child pornography,
or preventing terrorism.
Part of the implementation of such laws
is often by modifying DNS responses
to limit access to certain sites.
The same techniques can, of course,
also be used to block other types of content,
or restrict other kinds of speech.
Businesses and network operators might also block
or modify content.
The DNS server in a cafe, or a train,
that redirects you to a sign up page,
and asks for payment before letting you browse the web
on their Wi-Fi is an example.
Other examples might be services that filter spam
or block malicious attachments,
that enforce terms of service,
or that try to prevent copyright infringement.
And finally, of course, there are criminals,
and malicious users,
people modifying content to conduct phishing scams,
steal identities, mislead, and defraud.
And, again, what makes this problem space challenging
is that mechanisms that protect message integrity
against malicious attackers
also prevent benign modification.
For example, a recent development
in network security is DNS over HTTPS.
This is an approach to encrypting DNS traffic
that was designed to protect users from phishing attacks
where an attacker on the local network
spoofs DNS responses to perform identity theft.
It does this successfully.
Unfortunately, some Internet service providers in the UK
intentionally spoofed DNS responses
to block access to sites hosting child abuse material,
as part of a government-mandated blocklist.
Encrypting DNS traffic using DNS over HTTPS
to protect against identity theft
unintentionally also prevented
the child abuse block list from working,
since both relied on the same vulnerability in DNS.
And again, this is an area where there are difficult questions,
and it's not clear we have all the right answers.
The final reason for securing communications
relates to protocol ossification.
It's common for network operators to deploy middle boxes,
of various sorts, to monitor and modify traffic.
These can be devices such as NATs and firewalls,
traffic shapers, filters, or protocol accelerators.
And these middle boxes need to understand the traffic
they're observing or modifying.
For example, in order to translate IP addresses and ports,
a NAT needs to know the format of an IP packet,
and where the ports are located in the TCP and UDP header.
Equally, a traffic shaping device,
intended to limit the throughput of TCP connections
for a particular user,
needs to understand the congestion control
algorithm used by TCP,
otherwise how can it influence
the sending rate of a connection?
This means that the network becomes more complex.
It means that devices in the network no longer just look at
the IP headers and forward the packets
based on the destination address.
They also understand details of TCP and UDP,
and other protocols,
and observe, inspect, and modify those protocols too.
And this leads to a problem known as protocol ossification,
where it becomes difficult to change the protocols
running between the endpoints,
because doing so interacts poorly with middle boxes
that don't understand the new version of the protocol.
For example, it'd be very difficult to change the format
of the TCP header now, even if we could
upgrade all the systems to support the new version,
because of all the NATs and firewalls
that would also need updating.
This protocol ossification,
where the network learns about the transport
and higher layer protocols,
effectively prevents those protocols from being upgraded,
and occurs because the network has visibility
into those protocols.
Encryption offers one way to prevent ossification.
The more of a protocol that's encrypted,
the easier it is to change that protocol,
since the encryption will have stopped middleboxes
from understanding or modifying the data.
There's a trade off, though,
between the ability to change end-to-end protocols
and the ability of the network to offer helpful features.
The more of a protocol that's encrypted,
the easier it is to change the protocol.
But the harder it is for middle boxes,
to provide help from the network.
The draft shown on the slide,
on "Long-term viability of protocol extension mechanisms",
talks about these issues further,
and talks about how to extend and modify protocols
and ensure that protocols remain changeable.
It's very much worth reading.
As we've seen there are good reasons to encrypt
and authenticate data.
Doing so helps to provide privacy,
it helps to prevent fraud,
and it helps to allow protocols to evolve
while avoiding network ossification.
Providing security in this way is a good thing,
but there are always trade-offs,
and I've tried to highlight some of these.
In particular, it's always possible to find examples
where providing security to protect against some attacker
will prevent some beneficial monitoring or service.
There are no easy solutions here.
It's easy to argue that we must encrypt everything
to ensure privacy,
missing that this causes some real problems.
Equally, it's easy to argue that law enforcement
should have exceptional access to communications,
to help prevent terrorism and child abuse, for example,
missing that there are very real risks that this will cause
other serious problems.
We need more dialogue between engineers,
protocol designers, network operators,
policymakers, and law enforcement,
to better understand the constraints and the concerns.
The "Keys Under Doormats" paper, linked from the slide,
talks about these issues in more detail,
and I very much encourage you to read it.
Finally, as more and more data is encrypted and protected,
we're also starting to see increasing discussion
of end system based content monitoring.
The argument here is that encryption is important
to prevent attacks by malicious users,
but that law enforcement need access to protect us.
But, since effective encryption prevents law enforcement
from monitoring traffic on the network,
then maybe they should be able to monitor the traffic
on the end systems, after it's traversed the network.
And there's a certain appeal to this.
If done correctly, the encryption provides
protection against a large class of attacks,
and correct implementation of end-system based monitoring
limits who can monitor traffic
to those with legitimate needs and legitimate authority.
And, in some cases that's an appropriate compromise.
It doesn't seem problematic for social networks
like Facebook, for example,
to support law enforcement in monitoring their network
to detect people sharing child abuse material.
But, as Apple found out when they announced that they were
to implement similar monitoring running on iPhones
for one-to-one and group iMessage chats,
the expectations around privacy,
law enforcement access, and abuse protection,
vary greatly between social networks,
group communications, and public posts.
And the boundaries between these categories,
and what's acceptable in terms of monitoring
and protection and privacy,
can be very hard to distinguish.
And again, there are some difficult questions
relating to what type of privacy protection
and what type of monitoring is technically
possible to implement on end-systems,
and what's socially acceptable,
and what's desirable.
And the paper on the slide,
"Bugs in our pockets",
talks about this issue in a lot more detail.
So that wraps up the discussion of why
secure communication is needed.
Network traffic is frequently monitored
by governments, businesses,
network operators, and malicious users.
Some of this monitoring is beneficial,
some of it less so.
In the following parts, I'll talk about
the technologies we can use to provide privacy,
to protect message integrity,
and to prevent protocol ossification.
The 2nd part of the lecture reviews the principles of secure
communication. It describes the concepts behind symmetric, public-key,
and hybrid cryptography. It outlines techniques for message integrity
protection and authentication including cryptographic hash functions
protection and authentication including cryptographic hash functions
and digital signatures. And it reviews the need for a public key
infrastructure.
Slides for part 2
In this part, I want to talk
about some of the principles of secure
communication. I’ll talk about how we go
about ensuring confidentiality of messages as they
traverse the network.
About how we authenticate messages to ensure
that they're not modified in transit,
and about how we can go about
validating the identity of the participants in the communication.
So what are the goals of secure communication?
Well, we're trying to deliver a message
across the internet from a sender to a receiver.
In the process we want to avoid
eavesdropping on the message – we need
to encrypt it in order to provide
confidentiality, to make sure no one other
than the intended receiver can have access
to the content of the message.
We want to avoid tampering with the
message – we need to authenticate the
message to ensure that it's not modified
in transit by any of the devices
which are involved in the
delivery of that message.
And we want to avoid spoofing –
we want to somehow validate the identity
of the sender, so that the receiver
knows, and can be sure of who the message came from.
So how do we go about providing confidentiality?
Well unfortunately data traversing the network can
be read by any of the devices
on the path between the sender and the receiver.
It's possible to eavesdrop on packets as
they traverse the links that comprise the
network. And it's also possible to configure
the switches or routers to snoop on
the data as they're forwarding it between
the different links in the network.
The network operator can always do this.
They own the network;
they can configure the devices to save
a copy of the data if they choose to do so.
And if the network's been compromised, others may be able to do so too.
If an attacker can break
into the routers, for example, there's nothing
stopping them saving the data, redirecting copies
of data traversing the network to some other location.
If the data can always be read,
how do we provide confidentiality?
Well, we use encryption to make sure
that the data is useless if it's
intercepted or copied. We can't stop an
attacker, or the network operator, from reading
our data. But we can make sure
that they can't make sense of it
if they do read it.
There are two basic approaches to providing encryption.
The first is called symmetric cryptography.
Algorithms such as the Advanced Encryption Standard, AES.
The other approach is what's known as
public key cryptography.
Algorithms such as the
Diffie-Hellman algorithm, the RSA algorithm, and elliptic curve algorithms.
They have quite different properties and are
used in different situations. I’ll talk about
the details and the differences between them in a minute.
Both of them are based on some
fairly complex mathematics. I'm not going to
attempt to describe how that works.
What's important is not the details of
the maths. But what are their properties,
what behaviours do they provide, and how
do they help us secure data as it traverses the network?
So we’ll start with the idea of symmetric cryptography.
The idea of symmetric encryption is that
it can convert plain text into cipher
text with the aid of a key.
If you have, for example, the plain
text as we see on the top-right
of the slide, and we pass it
through the encryption algorithm, in this case,
AES, the Advanced Encryption Standard, with the
aid of an encryption key, we get
a blob of encrypted text as we
see it in the middle.
If we pass that encrypted text through
the inverse algorithm, the decryption algorithm,
using the same key, then we get
the original text back out.
The point is that a single secret
key controls both the encryption and the
decryption process. The key used to encrypt
is the same as the key used to decrypt.
Now, provided the key is kept secret,
and it's known only to the sender
and receiver, this can be very secure,
and it can be very fast.
Symmetric algorithms such as AES can encrypt
and decrypt many gigabits per second.
This makes them very suitable for Internet
communications because they don't slow down the
communications, while still providing security.
There are a wide range of different
symmetric encryption algorithms, probably the most widely
used is the US Advanced Encryption Standard, AES.
The AES algorithm was developed as part
of the output of an open competition,
run by the US National Institute of
Standards and Technology, and it's actually a Dutch algorithm
known as Rijndael.
Importantly, the AES algorithm, the Rijndael algorithm,
is public and the security of the
algorithm depends only on keeping the key
secret, not on keeping the algorithm itself secret.
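Using AES itself requires a cryptographic library, but the defining property of symmetric encryption, that one shared key performs both encryption and decryption, can be illustrated with a deliberately insecure toy XOR cipher. This is a sketch only, and is absolutely not a substitute for AES:

```python
# Toy illustration of symmetric encryption: the SAME key both encrypts
# and decrypts. This repeating-key XOR scheme is NOT secure; it only
# demonstrates the symmetric-key property described above.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret-key"
plaintext = b"Hello, world!"
ciphertext = xor_cipher(plaintext, key)   # encrypt
recovered = xor_cipher(ciphertext, key)   # decrypt with the same key
assert recovered == plaintext
assert ciphertext != plaintext
```

In a real system the same round-trip shape applies, with the toy function replaced by AES in an authenticated mode such as AES-GCM from a vetted library.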
The link on the slide is a
pointer to the specification for the algorithm,
and there’s a large amount of open
source code which implements it.
The problem of symmetric cryptography is that
you need to keep the key secret.
If anyone other than the sender and
the receiver know the key, then the
security of the encryption fails.
The question, then, is how do you
securely distribute the key? If I want
to exchange a secure message with
someone I know well, then this is
straightforward. I can meet them in person,
give them the key, and ensure that
no one else can eavesdrop on that communication.
The problem comes when I'm trying to
communicate securely with someone where I can't
meet them in person.
How do I securely get a key
from an Internet shopping site, for example?
The only means of communication I have
is over the Internet. And if I
send the key over the Internet,
someone can eavesdrop on the key,
and that gives them the ability to
decrypt our communications and breaks the security.
The solution to this is an approach
known as public key cryptography.
Public key cryptography, like symmetric cryptography,
is used to convert a plain text
message into an encrypted form. The difference,
though, is that there are two different
keys, and the key used to encrypt
the message, and the key to decrypt
the message are different.
The keys come in pairs. The two
halves of the pair are known as
the public key and the private key.
Importantly, a message which is encrypted using
one of those keys can only be
decrypted using the other key. If the
message is encrypted with the public key,
for example, then only the private key
can decrypt that message.
As you might expect from the names,
the idea is that you keep the
private key from the key pair secret,
and you make the public key as
public as is possible.
You publish it in the phone book,
you put it on your webpage,
you write it on your business card,
and you make sure everybody knows that
this is your public key.
In order to send you a message,
someone looks up your public key and
uses that to encrypt the message.
Once the message has been encrypted using
a particular public key, the only thing
which can decrypt it is the corresponding
private key. And since the private key
has been kept private, you're the only
one who can receive the message.
This solves the key distribution problem.
Provided you can look up the appropriate
public key for the receiver in a directory,
and you can trust that the receiver
has kept their private key secret,
then you use their public key to
encrypt the message, and you know that
they're the only one who can decrypt it.
This allows Internet shopping sites, and the
like, to work. If I wish to
buy something from Amazon, I look up
the key for Amazon in a directory,
use that to encrypt the message I'm
sending to Amazon, and I know that
they're the only ones that can decrypt it.
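The public/private key relationship can be sketched with "textbook" RSA using tiny primes. This is purely illustrative: real RSA uses enormous primes, padding schemes, and a vetted library, and the numbers here are trivially breakable.

```python
# Textbook RSA with tiny primes, only to illustrate that a message
# encrypted with the PUBLIC key can be decrypted only with the
# corresponding PRIVATE key. Completely insecure at this key size.
p, q = 61, 53
n = p * q                   # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                      # public exponent: gcd(e, phi) == 1
d = pow(e, -1, phi)         # private exponent: e*d ≡ 1 (mod phi)

message = 42                # in textbook RSA the message must be < n
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The receiver publishes (n, e) as widely as possible and keeps d secret; anyone can then send them a message that only they can read.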
The problem with public key cryptography is
that it’s very slow. The public key
algorithms such as the Diffie-Hellman algorithm,
the RSA algorithm,
and the elliptic curve algorithms, work millions
of times slower than symmetric encryption algorithms.
The result is that they’re too slow
to use for any realistic amount of
communication. The performance just isn't there.
Accordingly, modern communications use what's known as
hybrid cryptography, where they use a combination
of both public key and symmetric cryptography.
This provides both security and speed.
The way this works is that the
sender and receiver use public key cryptography,
which is very slow, to exchange a
small amount of information.
That information is then used as the
key for the symmetric encryption algorithm,
which is very fast.
In detail, the sender chooses a random
value, that we’ll call Ks, which will
be used as the key for the symmetric encryption.
The sender then looks up the receiver’s
public key, Kpub, uses it to encrypt
Ks and sends the result to the receiver.
The receiver uses its corresponding private key,
Kpriv, to decrypt the message and retrieve Ks.
This securely transfers Ks, the key for
the symmetric encryption algorithm, from the sender
to the receiver.
Doing this using public key encryption is
very slow, but the key for the
symmetric encryption, Ks, is very small,
so the fact it's very slow doesn't matter.
The sender then uses that key,
Ks, to encrypt future messages using symmetric
cryptography, for example, using the AES algorithm.
The receiver also has Ks, which it
exchanged using the public key encryption,
and can use that to decrypt the messages.
Symmetric cryptography is very fast, so the
performance of the communication, once it's got
started, is very quick, but it requires
the key to be exchanged securely.
The public key algorithm, which is slow,
is used to securely exchange the key.
The result is something which achieves both
confidentiality, and solves the key distribution problem,
and also achieves good performance.
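The hybrid flow just described can be sketched end to end, again with insecure toy primitives: textbook RSA with tiny primes transports the symmetric key Ks, and a hash-derived keystream stands in for a fast cipher like AES. Illustration only, under those stated assumptions:

```python
# Sketch of hybrid encryption: slow public key crypto moves the small
# symmetric key Ks; fast symmetric crypto then protects the bulk data.
# Both primitives here are insecure toys used only for illustration.
import hashlib, secrets

# Receiver's toy RSA key pair (tiny primes; insecure).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def stream_xor(data: bytes, key: bytes) -> bytes:
    # Keystream from SHA-256(key || counter): a stand-in for AES.
    out, i = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(b ^ k for b, k in zip(data, out))

# 1. Sender picks a random symmetric key Ks (small, so RSA cost is tiny).
Ks = secrets.randbelow(n - 2) + 2
# 2. Sender encrypts Ks with the receiver's PUBLIC key and sends it.
Ks_encrypted = pow(Ks, e, n)
# 3. Receiver recovers Ks with its PRIVATE key.
Ks_received = pow(Ks_encrypted, d, n)
assert Ks_received == Ks
# 4. Bulk data then flows under the fast symmetric cipher keyed by Ks.
msg = b"a long stream of application data..."
ct = stream_xor(msg, Ks.to_bytes(2, "big"))
assert stream_xor(ct, Ks_received.to_bytes(2, "big")) == msg
```

Only the short key transfer pays the public key performance cost; everything after that runs at symmetric speed.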
Encryption gives you confidentiality of data and
makes sure that no one can eavesdrop
on the messages being sent from the
sender to the receiver.
We also, though, need to verify the
identity of the sender, and make sure
that messages haven't been modified in transit.
In order to do this, we generate
a digital signature to authenticate our messages.
And the receiver can then validate that
signature, check the signature, to make sure
they came from the expected sender.
The digital signature relies on a combination
of public key cryptography,
and a cryptographic hash algorithm.
So first of all, what is a cryptographic hash?
A cryptographic hash function is a function
that takes some arbitrary length input and
produces a fixed length output hash that
somehow represents that input.
For example, at the top of the
slide, we see some input text going
through a hash algorithm, known as SHA256,
that produces the fixed length output block
you see on the right.
A cryptographic hash algorithm has four fundamental
properties. The first is that every input
will generate a different output, and the
slightest change to the input will change
the output value.
The second is that it should be
infeasible to find two inputs
that give the same output.
The third is that calculating the hash
itself should be fast, and going from
input to output should happen very quickly.
And the fourth, and perhaps most important,
is that reversing a hash should be
infeasible. If you're only given the output,
there should be no way of finding
out what the input was.
A cryptographic hash therefore acts as a
unique fingerprint for the input data.
It provides a short output, that uniquely
identifies a given message.
There are many different cryptographic hash algorithms.
The current recommendation is the SHA-256 algorithm,
specified by the IETF in RFC 6234.
There are a number of older algorithms,
such as MD5 and SHA1, which you may
hear about, but these all have known
security flaws and are not recommended for use.
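These properties are easy to see with the SHA-256 implementation in Python's standard library: the output length is fixed regardless of input, and the slightest change to the input produces a completely different digest (the avalanche effect):

```python
# SHA-256 from the standard library: fixed-length output, and a
# one-character change to the input alters the whole digest.
import hashlib

h1 = hashlib.sha256(b"The quick brown fox").hexdigest()
h2 = hashlib.sha256(b"The quick brown fax").hexdigest()

assert len(h1) == 64   # 256 bits = 32 bytes = 64 hex characters
assert h1 != h2        # the slightest change alters the output
print(h1)
```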
So how can we use a cryptographic
hash to help build a digital signature?
Well, in order to do that,
you take the message you wish to
send, and you calculate a cryptographic hash
of that message.
The sender then encrypts that hash with
their private key. Now the private key
is known only to the sender,
so they're the only one who can
encrypt that message.
But the thing which would decrypt it
is the sender’s public key, which is
available to everybody. Encrypting the hash with
the sender’s private key doesn't provide any
confidentiality, because anyone can decrypt the message
using the public key.
What it does do though, provided the
sender can be trusted to keep its
private key private, is demonstrate that the
sender must have encrypted the hash.
Since the hash is a fingerprint of
the message, this means that the sender
must have generated the original message.
The sender then attaches the encrypted hash
to the message, forming the digital signature.
The message, and its digital signature,
are then encrypted and sent to the
receiver using hybrid encryption.
When the message arrives at the receiver,
the receiver can verify the signature.
To do this, it first decrypts
the message and its digital signature.
The receiver then takes the message itself,
and calculates its cryptographic hash.
Having done that, it takes the digital
signature, looks up the sender’s public key,
and uses that to decrypt the digital
signature to retrieve the original
cryptographic hash that was in the message.
It compares the hash that was sent
in the message as part of the
digital signature with the cryptographic hash it calculated.
If the two match, then it knows
the message is authentic and has not been
modified, provided it trusts the sender to
have kept its private key private.
If the hash of the message it
calculated, and the hash that was sent
in the digital signature, don't match then
it knows that somehow the message has
been modified in transit.
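The sign-and-verify flow above can be sketched with the same insecure textbook RSA toy as before. Real signatures use schemes such as RSA-PSS or Ed25519 from a vetted library; here the SHA-256 hash is reduced mod the tiny modulus purely so the toy arithmetic can handle it, which real schemes never do:

```python
# Sketch of a digital signature: hash the message, then "encrypt" the
# hash with the sender's PRIVATE key; anyone can verify with the PUBLIC
# key. Textbook RSA with tiny primes: illustration only, insecure.
import hashlib

p, q = 61, 53
n, e = p * q, 17                    # sender's PUBLIC key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))   # sender's PRIVATE key

def toy_hash(msg: bytes) -> int:
    # SHA-256, reduced mod n so the toy RSA can sign it (insecure!).
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

message = b"Pay Alice 10 pounds"
# Sender: sign by raising the hash to the private exponent.
signature = pow(toy_hash(message), d, n)

# Receiver: undo the signature with the public key, and compare against
# a freshly computed hash of the received message.
assert pow(signature, e, n) == toy_hash(message)  # authentic, unmodified

# A tampered message hashes differently, so verification would fail
# (with overwhelming probability; the mod-n truncation weakens the toy).
tampered = b"Pay Mallory 10 pounds"
assert hashlib.sha256(message).digest() != hashlib.sha256(tampered).digest()
```

Note that signing provides no confidentiality, since anyone holding the public key can undo the signature; in practice the signed message is then encrypted with hybrid encryption, as the lecture describes.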
Public Key Encryption is therefore one of
the fundamental building blocks of a secure network.
It allows us to send a message
to a recipient securely, even if we've
not met that recipient, and be sure
that they're the only one who’ll be
able to decrypt that message. And it
allows us to use digital signatures to
verify that messages have not been modified in transit.
The security of public key encryption,
though, depends on knowing which public key
corresponds to a particular receiver.
There are three ways you can know
this. The first is that the receiver
gives you their key in person.
The second is that the receiver sent
you their key, but the message in
which they send it is authenticated by
someone you trust.
That is, there’s a digital signature in
the message, signed by someone whose key you
already have, that authenticates that this message
is from who it claims to be from.
The third is that someone you trust
gives you the receiver’s key.
In the Internet, the role of someone
you trust is often played by an
organisation known as a certificate authority,
as part of a public key infrastructure.
The role of a certificate authority is
to validate the identity of potential senders.
The certificate authority checks the identity of
a potential sender, and then adds a
digital signature to the sender’s public key
to indicate that it's done so.
If a receiver trusts the public key
infrastructure, trusts the certificate authority, then it
can verify that digital signature, added by
the certificate authority, to confirm the identity
of the sender.
These mechanisms, symmetric and public key encryption,
and digital signatures, allow us to provide
confidentiality for communication over the Internet that
performs well and is secure.
They allow us to authenticate messages,
and demonstrate that they've not been modified in transit.
And they allow us to validate the identity of senders
of those messages.
The 3rd part of the lecture describes the operation of the Transport
Layer Security Protocol (TLS) v1.3, one of the key security protocols
used in the Internet.
Slides for part 3
In previous parts of this lecture I
spoke about network security in general terms.
In part one, I discussed why security
is needed in order to protect Internet communications,
and in part two, I spoke about
how security is provided in outline.
I spoke about the different types of
encryption, public key and symmetric,
the use of hybrid encryption, in order
to improve performance while still maintaining security,
and the ideas of digital signatures and
public key infrastructure.
In this third part of the lecture,
I want to move on to talk
about Internet security in specific terms.
I want to talk about the Transport
Layer Security protocol, TLS version 1.3
I’ll begin by introducing what TLS is,
talking conceptually about what role it performs
in the network stack. And I'll talk
through some of the details of TLS.
I'll talk about the TLS handshake protocol,
that's used to establish TLS connections.
The record protocol, that's used to exchange
data. The 0-RTT extension, that reduces connection
setup times. And finally, I'll talk about
some of the limitations of TLS.
As we saw in some of the
earlier lectures, TCP connections are not secure.
Neither the TCP headers, nor the IP
headers, nor the data they transfer are
encrypted or authenticated in any way.
Data sent in a TCP connection is
not confidential. It can be observed by
governments, businesses, network operators, criminals,
or malicious users.
Similarly, the data is not authenticated.
Anyone who's able to access the network
connections, or the routers over which the
data flows, is able to modify that
data. And the sender and the receiver
will not be able to tell that
such modifications have been performed.
In order to provide security for data
going across a TCP connection, we need
to run some sort of additional security
protocol within that TCP connection to protect the data.
The way this is typically done in
the Internet, is using a protocol called
the Transport Layer Security protocol.
The latest version of this is TLS
1.3 and it's used to encrypt and
authenticate data that is carried within a TCP connection.
The official specification for TLS 1.3 is
RFC 8446, which was published by the
IETF in August 2018.
The TLS specification is not a simple
document to read.
In part, this is because it's solving
a difficult problem. Providing security over the
top of an insecure connection, a TCP
connection, is a complex challenge, and TLS
has to define a number of complex
mechanisms in order to provide that security.
In part, the complexity also comes because
TLS is an old protocol.
The latest versions of TLS have to
be backwards compatible, not only with previous
versions of TLS as specified, but with
previous implementation problems, and bugs in the
TLS specification and in its implementations.
The protocol designers have done a good
job, though. TLS version 1.3 is smaller,
faster, and simpler than previous versions of
TLS, and it's also more secure.
The slide lists four blog posts which
provide more information about TLS. The first
one is an introduction to TLS 1.3
from the IETF. This was written by
the TLS working group chairs, and introduces
the new features in the protocol.
The second, from CloudFlare, is a detailed
look at what's new in TLS 1.3,
as compared to previous versions of TLS.
It talks about some of the advantages
of TLS 1.3, and how it improves
security, and reduces the connection set up times.
The third of these, from David Wong,
attempts to redraw the TLS specification in
a way that makes it easier to
read. This is a copy of RFC
8446, the TLS specification, with the diagrams
redrawn in an easier to read way,
and with explanatory videos and comments added
to make it easier to follow.
The final post is the most detailed.
It's an annotated packet capture showing the
details of a TLS connection.
This walks through the TLS connection establishment
handshake, byte by byte, labelling each byte
with reference to the specification to explain
exactly what it means, and how the handshake proceeds.
I encourage you to review these four
blog posts. They give a nice complement
to the material I'll talk about in
the rest of this lecture, introducing how
TLS 1.3 works.
So what's the goal of TLS 1.3?
Well, given an existing connection, that's capable
of delivering data reliably and in the
order it was sent, but is insecure,
TLS 1.3 aims to add security.
That is, given a TCP connection,
it aims to add authentication, confidentiality,
and integrity protection to the data sent
over that connection.
In terms of authentication, it uses public
key cryptography, and a public key infrastructure,
in order to verify the identity of
the server to which the connection is made.
That is, the client can always verify
that it's talking to the desired server.
In addition, it provides optional authentication for
the client, to allow the server to
verify the identity of the client.
Once the connection has been established,
and verified to be correct, TLS provides
confidentiality for data sent across that connection.
It uses hybrid encryption schemes to provide
good performance, while still providing a strong
amount of security.
Finally, TLS authenticates data sent across the
connection, to provide integrity protection. It's not
possible for an attacker to modify data
sent across a TLS connection without that
modification being detectable by the endpoints.
How does TLS 1.3 work?
Well, first of all, a TCP connection
must be established. TLS is not a
transport protocol itself, and it relies on
an underlying TCP connection in order to transfer data.
Once the TCP connection has been established,
TLS runs within that connection.
There are two parts to a TLS
connection. It begins with a handshake protocol,
and then proceeds with a record protocol.
The goal of the handshake protocol,
at the beginning of the connection,
is to authenticate the endpoints and agree
on what encryption keys to use.
Once this is completed, TLS switches to
running the record protocol, which lets endpoints
exchange authenticated and encrypted blocks of data
over the connection.
TLS turns the TCP byte stream into
a series of records. It provides framing,
delivers data block by block, each block
being encrypted and authenticated to ensure that
the data being sent in that block
is confidential, and arrives unmodified.
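From an application's point of view, this layering of the handshake and record protocols over TCP is usually hidden behind a library. As an illustrative sketch, Python's standard `ssl` module wraps an ordinary TCP socket in a TLS session; the hostname `example.com` below is just a placeholder.

```python
import socket
import ssl

# Build a client-side TLS context: verify the server's certificate
# against the system's trusted certificate authorities, and insist
# on TLS 1.3.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

def fetch_page(host: str = "example.com") -> bytes:
    """Sketch of a client: TCP handshake, then TLS handshake, then data."""
    with socket.create_connection((host, 443)) as tcp:          # TCP SYN/SYN-ACK/ACK
        # server_hostname is the name sent in the ClientHello, and is
        # also checked against the certificate the server presents.
        with context.wrap_socket(tcp, server_hostname=host) as tls:  # TLS handshake
            request = b"GET / HTTP/1.1\r\nHost: " + host.encode() + \
                      b"\r\nConnection: close\r\n\r\n"
            tls.sendall(request)           # sent via the record protocol
            return tls.recv(4096)

# The defaults reflect the security goals described above:
assert context.check_hostname                     # server identity is verified
assert context.verify_mode == ssl.CERT_REQUIRED   # a valid certificate is required
```

The context settings correspond directly to the goals above: the server is authenticated via its certificate, and all application data is encrypted and integrity-protected in records.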
A secure connection over the Internet starts
up establishing a TCP connection as normal.
The client connects to the server,
sending a SYN packet, along with its
initial sequence number.
The server responds with a SYN-ACK,
acknowledging the client’s initial sequence number,
and providing the server’s initial sequence number.
And then the client responds with an
ACK packet, acknowledging that packet from the server.
This sets up a TCP connection.
Immediately following that, the TLS handshake starts,
running within the TCP connection itself.
The TLS client sends a TLS ClientHello
message to a server immediately following the
final ACK of the TCP handshake.
The server responds to that with a
TLS ServerHello message, and then the client
responds with a TLS Finished message.
This concludes the handshake, and carries the
first block of secure data. Following this,
the client and the server switch to
running the TLS record protocol over the
TCP connection, and exchange further secure data blocks.
As can be seen the TLS handshake
adds an additional round trip time to
the connection establishment.
At the start of the connection,
there's an initial round trip time while
TCP connection is set up.
And then this is followed by an
additional round trip, while the TLS connection
and the security parameters are negotiated,
before the data can be sent.
There's a minimum of two round trip
times from the start of the TCP
connection to the conclusion of the TLS
handshake and the first secure data segment being sent.
The first part of the TLS handshake
is the ClientHello message. This is sent
from the client to the server,
and begins the negotiation of the security parameters.
The ClientHello message does three things.
It indicates the version of TLS that is
to be used. It indicates the cryptographic
algorithms that the client supports, and provides
its initial keying material. And it indicates
the name of the server to which
the client is connecting.
You may wonder why the ClientHello message
needs to indicate server name, given that
it's running over a TCP connection that's
just been established to that server.
The reason for this, is that TLS
is often used with web hosting,
and it's common for web servers to
host more than one website,
so the server name provided in the
TLS ClientHello indicates which of the sites
accessible over that TCP connection
the client is trying to establish
a secure connection to.
The ClientHello message also indicates which version
of TLS is to be used.
What you would expect to happen here,
is that it would indicate that it
wishes to use TLS 1.3.
What actually happens, though, is that the
ClientHello message includes a version number indicating
that it wants to use TLS version
1.2, the previous version of TLS.
The ClientHello message includes an optional set
of extension headers, and one of those
extension headers includes an extension which says
“actually I’m really TLS version 1.3”.
The reason the version negotiation happens in
such a weird way, specifying an old
version of TLS in the version field,
and using an extension to indicate the
real version, is because there are too many middle
boxes, too many devices which try to
inspect TLS traffic in the network,
and which fail if the version number changes.
The protocol has become ossified.
We waited too long between versions of TLS.
Too many devices were deployed, too many
endpoints were deployed, which only understood version 1.2
and which didn't correctly support the version
negotiation. And then, when it came to
deploying a new version, and people tried
with early versions of TLS to just
change the version number to 1.3,
it was found that those new versions
didn't support the change.
The result was that connections that indicated
TLS version 1.3 in the header would
tend to fail,
whereas those that pretended to be TLS
version 1.2, using an extension header to
upgrade the version number, would work through
those middleboxes, and the connection could succeed
and proceed with the new version.
The ClientHello message is the first part
of the connection setup handshake. It doesn't
carry any new data.
Following the ClientHello, the server responds with
a ServerHello message.
The ServerHello message also indicates the version
of TLS which is to be used
and, like the ClientHello, it indicates that
the version is actually TLS version 1.2
and includes an extension header to say
that it’s really a TLS 1.3 connection
that's being established.
In addition to the version negotiation,
The TLS ServerHello includes the cryptographic algorithms
selected by the server, which are a
subset of the set suggested by the client.
That is, the client suggests the cryptographic
algorithms which it supports, and the server
looks at those, finds the subset of
them which are acceptable to it,
picks one of them, and includes that
in its response.
The ServerHello message also includes the server’s
public key, and a digital signature which
can be used to verify the identity
of the server.
Like the ClientHello, it doesn't include any data.
Finally, the TLS handshake concludes with a
Finished message, which flows from the client
to the server. The TLS Finished message
includes the client’s public key and, optionally,
it includes a certificate which is used
to authenticate the client to the server.
The TLS Finished message concludes the connection setup.
In addition to the connection setup,
it may therefore include the first part
of application data that is sent from
the client to the server.
TLS uses the ephemeral elliptic curve Diffie-Hellman
key exchange algorithm in order to derive
the keys used for the symmetric encryption.
The client and the server exchange their
public keys, as part of the connection
setup handshake, and they then combine those
two public keys to derive the key
that's used for the symmetric cryptography.
The maths of how this works is
complex. I'm not going to attempt to
describe it here.
What's important though, is that the symmetric
key is never exchanged over the wire.
The client and the server only exchange
their public keys, and the symmetric key
is derived from those.
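The shape of this exchange, that each side derives the same symmetric key without that key ever crossing the wire, can be shown with classic finite-field Diffie-Hellman. TLS 1.3 normally uses ephemeral elliptic-curve groups, and the tiny textbook parameters below (p=23, g=5) are purely illustrative, not secure.

```python
import secrets

# Simplified finite-field Diffie-Hellman with textbook-sized parameters.
p, g = 23, 5

# Each side picks a random private value and derives a public key.
a = secrets.randbelow(p - 2) + 1   # client's private key (never sent)
b = secrets.randbelow(p - 2) + 1   # server's private key (never sent)
A = pow(g, a, p)                   # client's public key (sent in ClientHello)
B = pow(g, b, p)                   # server's public key (sent in ServerHello)

# Each side combines its own private key with the peer's public key.
client_secret = pow(B, a, p)       # (g^b)^a mod p
server_secret = pow(A, b, p)       # (g^a)^b mod p

# Both arrive at the same value, from which the symmetric key is derived.
assert client_secret == server_secret
```

An eavesdropper sees only A and B; recovering the shared secret from those requires solving the discrete logarithm problem, which is infeasible for the large groups used in practice.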
A TLS server provides a certificate that
allows the client to verify its identity
as part of the ServerHello message.
The client can optionally provide this information
along with its Finished message.
The result is that the client can always
verify the identity of the server,
and the server can optionally verify the
identity of the client.
The choice of encryption algorithm is driven
by the client, which provides the list
of the symmetric encryption algorithms that it
supports as part of its ClientHello message.
The server picks from these, and replies
in its ServerHello.
The usual result is that either the
Advanced Encryption Standard, AES, or the ChaCha20
symmetric encryption algorithm is chosen.
Once the TLS connection establishment protocol,
the handshake protocol, has completed, the TLS
record protocol starts. The record protocol allows
the client and the server to exchange
records of data over the TCP connection.
Each record can contain up to
2^14 (16,384) bytes of data,
and is both encrypted and authenticated.
Records of data have a sequence number,
and they are delivered reliably, securely,
and in the order in which they were sent.
The underlying TCP connection does not preserve
record boundaries. TLS adds framing to the
connection so that it does so,
and reading from a TLS connection will
block until a complete record of data arrives.
A TLS connection usually uses the same
encryption key to protect data for the
entire connection. However, in principle, it can
renegotiate encryption keys between records, if there's
a need to change the encryption key
partway through a connection.
The TLS record protocol allows the client
and the server to exchange records,
to send and receive data as they need.
Once they finish doing so, they close
the connection, which closes the underlying TCP connection.
TLS 1.3 usually takes one round trip
time to establish the connection after the
TCP connection set up.
That is, there's the TCP SYN,
SYN-ACK, ACK handshake to establish the TCP
connection, and then an additional round trip
time for the TLS ClientHello, ServerHello, and Finished exchange.
However, if the client and the server
have previously communicated, TLS 1.3 allows them
to reuse some of the connection setup
parameters, and re-use the same encryption key.
The way this works is that the
server can send an additional encryption key
as part of its ServerHello message,
and the client can remember that key,
and use it the next time it
connects to the server. This is known
as a pre-shared key.
When the client next connects to that
server, it sends its ClientHello message as
normal. However, in addition to that ClientHello
message, it can also include some data,
and that data is encrypted using the pre-shared key.
The ServerHello also proceeds as normal.
But again, can contain data encrypted using
the pre-shared key, and sent in reply
to the client, to the data included
in the ClientHello message.
The use of the pre-shared key therefore
allows the client and the server to
exchange data along with the initial connection
setup handshake. It allows data to be
exchanged within zero RTTs of the connection
set up, as part of the first messages sent.
This extension is therefore known as the
0-RTT mode of TLS 1.3.
The 0-RTT mode is useful, because it
allows connections to start sending data much
earlier. It removes one round trip time’s
worth of latency. However, it has a limitation.
The limitation is that, unlike the record
packets which contain a sequence number,
TLS ClientHello and ServerHello messages don't contain
a sequence number.
A consequence of this, is that data
sent as part of a ClientHello,
or a ServerHello, may be duplicated,
and TLS has no way of stopping this.
If you're writing an application that uses
TLS in 0-RTT mode you need to
be careful, and only send what's known
as idempotent data,
data where it doesn't matter if that
data is delivered more than once to
the server, in the 0-RTT packets.
Data that is sent after the first
round trip time has concluded, as part
of the regular TLS connection, doesn't suffer
from this problem, and is only ever
delivered to the application once.
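One way to reason about this is from the server's side: because 0-RTT data may be replayed, the server must either receive only idempotent requests, or deduplicate them itself. The sketch below, with hypothetical handler names not taken from any real API, shows the deduplication approach.

```python
# Sketch: because 0-RTT early data can be replayed by an attacker,
# a server that must act on it can deduplicate by request identifier.
# Names here are hypothetical, for illustration only.
seen_requests: set[str] = set()

def handle_early_data(request_id: str, action: str) -> str:
    """Process 0-RTT data at most once per request identifier."""
    if request_id in seen_requests:
        return "duplicate ignored"       # a replayed copy has no effect
    seen_requests.add(request_id)
    return f"processed {action}"

assert handle_early_data("req-1", "GET /index.html") == "processed GET /index.html"
assert handle_early_data("req-1", "GET /index.html") == "duplicate ignored"
```

A plain idempotent request, such as fetching a static page, needs no such tracking; the danger is with requests that have side effects, like payments.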
A TLS connection is secure, but it
has a number of limitations.
TLS operates within a TCP connection.
A consequence of this, is that the
IP addresses and the TCP port numbers
are not protected. This exposes information about
who is communicating, and what application is in use.
Further, the TLS ClientHello message includes the
server name, but doesn't encrypt that.
This exposes the host name of the
server to which the connection is being
made, and may be a significant privacy leak.
An extension, known as Encrypted Server Name
Indication, is under development, but this is
not finished yet, and there are some
concerns that it may be very difficult to deploy.
TLS also relies on a public key
infrastructure to validate the keys, and to
verify the identity of clients and servers.
There are some significant concerns about the
trustworthiness of this public key infrastructure.
The reasons for this are not that
the cryptographic algorithms or the mechanisms are
insecure, they’re that the browsers tend to
trust a very large range of certificate authorities,
and it's not clear to what extent all of these certificate
authorities are actually trustworthy.
The final limitation of TLS is that
the 0-RTT extension may deliver data more than once.
0-RTT is a very useful extension,
because it allows data to be delivered
with low latency at the start of
the connection, but it runs the risk
that the data is delivered multiple times,
so must be used with care.
That concludes the discussion of TLS. I spoke
about what TLS is. I've talked about
the TLS handshake protocol, that establishes the
connection using the ClientHello, ServerHello,
and Finished messages,
and that agrees the appropriate cryptographic parameters.
And I spoke about the TLS record
protocol, which is used to actually exchange the data.
The TLS 0-RTT extension allows for faster
data transfer at the beginning of the
connection, but comes with some risks of
data replay attack. Finally, I spoke about
some of the limitations of TLS.
The TLS protocol has actually been wildly
successful. It's used to secure all the
traffic sent over the web. And when
used correctly, is very much a secure
protocol, that performs very well.
In the final part of the lecture,
I'll move on from talking about the details of the
cryptographic mechanisms, and the transport protocols,
to talk about some of the issues with writing secure networked applications.
The final part of the lecture discusses systems aspects of providing
secure communication. It reviews the need for end-to-end security to
protect communications. It discusses the robustness principle, and
its implications for the design on input parsers and other aspects
of networked systems. And it briefly reviews some of the challenges
in writing secure code.
Slides for part 4
In the previous parts, I’ve spoken about
the general principles underlying secure communication,
and about the Transport Layer Security protocol,
TLS 1.3, that protects most Internet communications.
In this final part of the lecture,
I want to raise some issues to
consider when developing secure networked applications.
In particular, I want to discuss the
need for end-to-end security, and the problems
of making secure communication in the presence
of content distribution networks, servers, and middleboxes.
I want to talk about the robustness
principle, and the difficulty in designing and
building networked applications. And I want to
talk about the need to carefully validate
input data, and some of the issues
around writing secure code.
For communication to be secure, it must be end-to-end.
That is, the secure communication must run
between the initial sender and the final
recipient, and the message must not be
decrypted or lose integrity protection at any
point along the path.
That is harder to arrange than you might expect.
If the communication is between a client
and a server located in a data
centre, it’s easy to understand what is
the client endpoint. It’s the phone,
tablet, or laptop on which the application
making the request is running. What is
the endpoint in the data centre though?
Does the secure connection terminate at the
load balancing device at the entrance to
the data centre, that chooses which of
the many possible servers responds to the
request? If so, does that load balancer
make a secure onward connection to the
back-end server, or is the connection unprotected
within the data centre?
If the secure connection passes through the
load balancer and terminates on the back-end
server, are the connections between the back-end
servers and the databases, compute servers,
and storage servers in other parts of
the data centre secure? And, once the
request has been handled, how is the
data protected once it’s stored in the data centre?
What is your threat model? Are you
concerned about protecting your communication as it
traverses the wide area network between your
client and the data centre? Or are
you also concerned with protecting communications within
the data centre? If you’re concerned about
communications and data storage within the data
centre, are you trying to protect against
other tenants of the data centre? Or
against malicious users that may have compromised
the data centre infrastructure? Or against the
data centre operator?
Similar issues arise with content distribution networks.
CDNs, such as Akamai, are widely used
as the backend infrastructure for websites,
software updates, streaming video services, and gaming
services. Applications like the Steam store,
the BBC iPlayer, Netflix, and Windows Update,
have all run on CDNs at various
times, although many of them now use
their own infrastructure.
CDNs are essentially large-scale highly distributed web
caches. They provide local copies of data,
to improve performance compared to having to
fetch the content from the master site.
The secure HTTPS connection is therefore from
the client to the CDN, rather than
from the client to the original site.
This introduces an intermediary into the path.
The CDN now has visibility into what
requests a client is making, in addition
to the original service.
Performance is better, but you’re forced to
trust a third party with information about
what sites you’re visiting.
Equally, the data has to get to
the CDN caches somehow, and has to
be protected as it’s fetched from the
original server to populate the cache.
You have to trust the CDN to
do this correctly. As a user of
the CDN, you have no way of
knowing how, or indeed if, that data is protected.
In many cases, data is moving between
two users. Is that data encrypted end-to-end
between the two users? Or is the
data encrypted between the users and some
data centre, but visible to the data
centre? The difference can matter: if the
data centre has access to the unprotected
data, it may be used to target
advertising, and it’s much more likely to
be accessible to law enforcement or government agencies.
Many applications use some form of in-network
processing. For example, video conferencing systems often
use a central server to perform audio
mixing and to scale the video for each participant.
For example, in a large video conference,
if many users are sending video,
then all the video goes to a
central server. That server only forwards high
quality video for the active speaker,
and sends a smaller, more heavily compressed,
version for the other participants.
This reduces the amount of video sent
out to each of the participants,
and prevents overloading their network connections.
This is a good thing.
But, it also means that the central
server has access to the audio and
video. The server can record that video,
if it so chooses, and potentially share
it with others. That may be a
concern, depending on what’s being discussed.
An alternative way of building such an
application leaves the data encrypted, and doesn’t
give the server access. This increases the
privacy of the users, since the data
is encrypted end-to-end and isn’t available to
the server, but means that the server
can’t help compress the data and manage
the load, and it means that server-based
features, like cloud recording and captioning become
much harder to provide. It trades off features
and performance, for increased privacy.
When building networked applications, it’s important to
consider how the network protocol is implemented.
Network protocols can be reasonably complex,
and difficult to implement. They have a
syntax and semantics, in many ways similar
to a programming language. And, like a
program, the protocol messages your application receives
may contain syntax errors or other bugs.
What do you do if the protocol
data you receive is incorrect?
A frequently quoted guideline is Postel’s law.
This is named after Jon Postel,
the original editor of what became the
IETF’s RFC series of documents, and an
influential contributor to the early Internet.
Postel’s law can be summarised as “Be
liberal in what you accept, and conservative
in what you send”.
That is, when generating protocol messages,
try your hardest to do so correctly.
Make sure the messages you send strictly
conform to the protocol specification.
But, when receiving messages, accept that the
generator of those messages may be imperfect.
If a message is malformed, but unambiguous
and understandable, Postel’s law suggests accepting it.
That’s fine, but it’s important to balance
interoperability with security. Don’t be too liberal
in what you try to accept.
Having a clear specification of how and
when you will fail might be more valuable.
Postel’s law says “Be liberal in what
you accept, and conservative in what you send”.
That makes sense if you trust the
other devices on the network.
It makes sense if the problems with
the messages they send are honest mistakes,
and not intended to be malicious.
The network has changed since Postel’s time.
As Poul-Henning Kamp, one of the FreeBSD
developers, says “Postel lived on a network
with all his friends. We live on
a network with all our enemies.
Postel was wrong for today’s Internet”.
This is an important point.
Any networked system is frequently attacked.
There are many people scanning the network
for vulnerabilities. Actively trying to break your
applications. If you write a server,
and make it accessible on the Internet,
then people will try to break it.
This is not because you’re a target.
It’s because machines and network connections are
now fast enough that it’s possible to
scan every machine on the Internet,
to see if it’s vulnerable to a
particular problem, within a few hours.
It’s not personal. But your server will be attacked.
The paper shown on the slide,
on “The Harmful Consequences of the Robustness
Principle”, by Martin Thomson, talks about this
in detail, and gives detailed guidance on
how to handle malformed messages. If you
write networked applications, I strongly encourage you
to read it.
One of the key points made is
that networked applications work with data supplied
by un-trusted third parties.
As we’ve discussed, data read from the
network may not conform to the protocol
specification. This may be due to ignorance,
bugs, malice, or a desire to disrupt services.
One of the most critical lessons is
that you must carefully validate all data
received from the network before you make use of it.
Don’t trust arbitrary data that comes from
another device over the network. Check it
carefully, and make sure it contains what
you expect, before use.
This is especially important when working in
scripting languages, which often contain escape characters
that trigger special processing. The cartoon on
the slide is an example. The idea
is that the software processing the student’s
name sees the closing quote, and interprets
the rest of the name as an
SQL command to delete the student records
from the database.
It’s a silly example.
But it’s surprising how often similar problems,
known as SQL injection attacks, occur in practice.
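The standard defence against this is to pass untrusted input as data, never by splicing it into the query text. A minimal sketch using Python's built-in sqlite3 module, with the hostile name from the cartoon:

```python
import sqlite3

# The "little Bobby Tables" name: the quote and semicolon would change
# the meaning of a query built by string concatenation.
student = "Robert'); DROP TABLE Students;--"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Students (name TEXT)")

# A parameterised query: the '?' placeholder passes the name as data,
# so the hostile characters are never interpreted as SQL.
db.execute("INSERT INTO Students (name) VALUES (?)", (student,))

# The table still exists, and the hostile string is stored verbatim.
(count,) = db.execute("SELECT COUNT(*) FROM Students").fetchone()
assert count == 1
(stored,) = db.execute("SELECT name FROM Students").fetchone()
assert stored == student
```

Had the query been built as `"INSERT ... VALUES ('" + student + "')"`, the database would instead have parsed the embedded `DROP TABLE` as a command; the placeholder form makes that impossible by construction.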
And similar problems occur in many other
programming languages. This is not just an SQL problem.
Be careful how you process data.
And, in general, be careful how you
write networked applications.
The network is hostile.
Any networked application is security critical.
Anything that receives data from the network
will be attacked.
When writing networked applications, carefully specify how
they should behave with both correct and
incorrect inputs. Carefully validate inputs and handle
errors. And check that your code behaves
as expected. Try to break your application,
before someone else does.
If you’re writing your application using a
type- or memory-unsafe language, such as C
and C++, take extra care, since these
languages have additional failure modes.
It’s very easy to write a C
or C++ program that suffers from buffer
overflows, use after free bugs, race conditions, and so on.
Such bugs are almost certainly security vulnerabilities.
As a rule of thumb, if you’ve
written a C or C++ program,
and can cause it to crash with
a “segmentation violation” message, then that’s probably
exploitable as a security vulnerability.
Have you ever managed to write a
non-trivial C program that never crashes in that way?
This is why network programming is difficult.
The network, today, is an extremely hostile environment.
Networked applications are security critical,
and writing secure code is a very difficult skill.
If you have the choice, use popular, well-tested,
pre-existing software libraries for network protocols
where possible, especially do so for implementations
of security protocols such as TLS.
And make sure to update these libraries
regularly, because problems and security vulnerabilities are
regularly found and fixed.
The best encryption in the world doesn’t
help if the endpoints can be
compromised and the data stolen before it’s encrypted.
This concludes our discussion of secure communications.
In the first part, I spoke about
the need for secure communication, and some
of the challenges and trade-offs in enabling security.
In the second part, I discussed the
principles of secure communication in abstract terms,
talking about symmetric and public key encryption,
and how these are combined to give
hybrid encryption protocols. I spoke about digital
signatures to authenticate data, and about public
key infrastructure and certificate authorities.
In the third part, I spoke about the Transport Layer Security
protocol, TLS 1.3, that instantiates hybrid encryption
and digital signatures into a concrete network
protocol, that secures web traffic and other applications.
And, finally, I discussed some issues to
consider when writing networked applications.
Ensuring communications security is a difficult problem.
It’s technically difficult, because you need to
write extremely robust software, and need to
design secure network protocols that use sophisticated
cryptographic mechanisms. And it’s politically difficult,
because there are some extremely sensitive policy
questions around what information should be protected,
and against whom.
The TLS 1.3 protocol is the current
state-of-the-art in secure communications. In the next
lecture, we’ll move on to further discuss
its limitations, and some of the ways
in which people are trying to improve
network security and performance.
Lecture 3 discussed secure communication. It started with a discussion of
the need for security, and the issues with balancing security, privacy,
and the needs of law enforcement, regulatory compliance for businesses,
and the need to effectively manage networks. It then moved on to discuss
the principles by which secure communication can be achieved, via a mix
of symmetric and public key encryption and digital signatures. And it
outlined how these are used in the transport layer security protocol, TLS 1.3.
The focus of the discussion will be to check your understanding of
the principles of security. How do symmetric and public key
encryption work, and how are they combined in practice? And how do
digital signatures work? The mathematics behind this work
is outside the scope of this course, and will not be discussed, but
the principles are important.
Discussion will also consider how TLS uses these techniques to
ensure security. How does the TLS handshake work? What
guarantees does TLS provide to applications? How does the use of 0-RTT
session resumption change those guarantees and what benefits does it
provide in exchange?
Finally, the discussion will also focus on the need to consider the
different impacts of providing secure communication. There are clear
benefits to providing security, but also some unexpected costs that can
lead to tension between users, vendors, network operators, businesses
and governments. The discussion will start to highlight some of these
issues. What should we encrypt? What are the trade-offs of
privacy vs law enforcement access? What doesn't encryption protect?