The final lecture will briefly review the course aims and contents,
then conclude with a forward-looking discussion of possible future
directions in which the network might evolve.
The final part of the lecture reviews how the Internet is changing,
to reduce latency, improve security, and avoid protocol ossification,
and discusses some of longer-term research work driving the evolution
of the network.
In this final lecture, I want to
talk about some possible future directions for
the development of the network.
So this course has focused on how
the Internet can change and evolve to address
some coming challenges.
It’s focused on the issues of how we establish
connections in an increasingly fragmented network,
thinking about the issues with network address
translation, the issues with the rise of
IPv6 and dual-stack hosts,
and I’ve spoken in some detail about
the challenges in establishing connections when the
machines are not necessarily in a common
addressing realm, and when there are multiple
different ways of potentially reaching a machine.
This includes techniques such as
the ICE algorithm for NAT traversal, and the
Happy Eyeballs technique for racing IPv4 and IPv6 connections.
I’ve spoken about some of the issues with encryption,
protecting against pervasive network monitoring,
and preventing protocol ossification.
And this led to aspects of the
design of QUIC, with most of the protocol,
including the overwhelming majority of the transport-layer
headers, being encrypted, and those fields which
are not encrypted being greased in order
to allow evolution.
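As a rough illustration of what greasing means in practice: RFC 9000 reserves QUIC version numbers of the form 0x?a?a?a?a, and transport-parameter identifiers of the form 31 × N + 27, specifically so that endpoints can send meaningless values which peers must tolerate, keeping those code paths exercised. The helper functions below are my own sketch of how such values might be generated, not an API from any real QUIC stack.

```python
import random

def grease_quic_version(rng=random):
    """Pick a reserved QUIC version of the form 0x?a?a?a?a: RFC 9000
    reserves this pattern for versions used to exercise version
    negotiation, so that the negotiation path can't ossify."""
    v = 0
    for _ in range(4):
        # Each byte has a random high nibble and a low nibble of 0xA.
        v = (v << 8) | (rng.randrange(16) << 4) | 0x0A
    return v

def grease_transport_parameter_id(rng=random):
    """Pick a reserved transport-parameter identifier of the form
    31 * N + 27: RFC 9000 reserves these so that senders can include
    meaningless parameters which receivers must silently ignore."""
    return 31 * rng.randrange(2 ** 16) + 27
```

The important point is not the arithmetic, but the contract: receivers must ignore these values rather than reject them, so middleboxes and implementations can't come to depend on only ever seeing the currently-defined values.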
And that's partly a security measure,
and it's partly an evolvability measure:
it’s a way of keeping the protocols changeable,
by deploying encryption to prevent middleboxes
interfering with our communications.
And I’ve spent a fair amount of
time talking about how we can reduce
latency, and support real-time and interactive content.
And, partly, this comes in, again,
in the design of protocols like QUIC,
in the design of TLS 1.3 with
reducing the number of round trips needed
to set up a connection.
It comes in, in
systems like content distribution networks that move
content nearer the edges, near the customers,
to reduce latency.
And it comes in, in the design
of real-time applications and protocols like RTP.
And I spoke about some of the
issues with congestion control, wireless networks,
and content distribution,
and how to make applications much more adaptive.
We’ve considered some of the challenges in
content distribution networks and naming, and how
you can securely
find the names for a piece of
content on the Internet that you want
to access, and how to do that
without being subject to phishing attacks.
And some of the challenges, and the
tussle for control of the DNS and naming,
and how those relate to censorship and
filtering, but also how the DNS can
be used to support content distribution networks.
And we've spoken about routing, and efficient
content delivery, in the last lecture.
And some of this leads into discussions
about decentralisation of the network, and the
rise of hyper-giants and content distribution networks,
that centralise content onto
a small number of providers.
As we've seen, there’s a large number of challenges.
And as a result of that,
the Internet is actually in the middle
of one of the most significant periods
of change that I’ve seen in
the time I’ve been involved in networking.
We're seeing IPv6 beginning to be significantly deployed.
And, partly, this is providing for increased address space,
and increasing numbers of devices on the network.
But it is also flexible enough,
because of the size of that address
space, that people are starting to look
at what they can do with IPv6
to evolve the way the network is being developed.
It's flexible, in that it’s got enough
bits in the address, that semantics can
be assigned to the addresses. So bits
can have meaning other than, perhaps,
just the location of a device on the network.
And it's got a very flexible header
extension mechanism that can be used to
provide extra semantics,
and provide application semantics, as part of
the packet headers, to allow special processing.
And people are starting to explore the
things you can do with IPv6 as
it gets more widely deployed.
We've seen TLS 1.3 rolled out,
massively improving and simplifying security.
And we've seen it be incorporated into
the QUIC protocol, as the basis for
future transport evolution.
And, I've described QUIC as essentially a
better version of TCP, or as an
encrypted version of TCP, which combines the
goals of TCP and TLS, and also adds
this idea of multi-streaming.
But I think QUIC is actually going
to be the basis for a lot
more developments, and a lot more evolution.
We’re already seeing this, to some extent.
There is already a datagram extension to
QUIC going through the standards process,
to start supporting real-time applications effectively over
QUIC, and it's pretty clear that we're
going to see a lot more evolution
and development in that space, with people using QUIC
as the basis for future transport protocols,
for real-time and interactive applications, and so on.
And this has led to the coming,
I think, adoption of HTTP/3 as the
basis for evolving the web,
and HTTP growing beyond web documents to
include a much richer set of real-time
and interactive services.
And, in parallel to this, I think
we've seen the rise of changes to
the DNS, with many ways of running DNS
over encryption, whether it's DNS over
HTTPS, or over TLS, or over QUIC,
in order to get secure name resolution.
And to avoid some of the control
points. And we’re seeing CDNs and overlays
increasingly making use of the DNS
for directing hosts to content.
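To give a flavour of how DNS over HTTPS works at the wire level, here's a sketch of how RFC 8484 encodes a query for the GET method: a standard DNS wire-format message is base64url-encoded, without padding, into the `dns` query parameter of an HTTPS URL. The function is my own illustrative helper; the resolver URL shown is just an example of a public DoH endpoint.

```python
import base64
import struct

def build_doh_url(resolver, name, qtype=1):
    """Encode a DNS query for `name` as an RFC 8484 DNS-over-HTTPS
    GET request URL against `resolver` (qtype 1 = A record)."""
    # DNS header: ID 0 (RFC 8484 recommends ID 0, so identical queries
    # are HTTP-cacheable), flags 0x0100 (recursion desired), one
    # question, and no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels ending with a zero
    # byte, then QTYPE and QCLASS (IN = 1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)
    wire = header + question
    # base64url, with the trailing "=" padding stripped, per RFC 8484.
    dns = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns}"

url = build_doh_url("https://cloudflare-dns.com/dns-query", "example.com")
```

The resulting URL can be fetched with any HTTPS client, with an `Accept: application/dns-message` header; the response body is a normal DNS response message. From the network's point of view, the query is indistinguishable from ordinary web traffic, which is precisely the point.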
And I think we're seeing an increasing
tussle for control, between the different industries
and the different providers.
On the one hand we've got
the model I’m describing, with QUIC,
and TLS, and HTTP/3, and encrypted DNS
to allow the application providers, and their
customers, and the end-users, to talk directly,
and to limit the visibility of the
network into that communication.
And, on the other hand, we have
operators trying to build application awareness into
their networks, trying to increase the communication
between the network and the endpoints,
to improve performance, and to sell enhanced services.
And there’s a tussle, where it’s not
clear how it's going to play out.
So that's the current set of developments
in the network, and those are the areas
I've been trying to focus on
in this course, describing how the network
is currently changing.
In this last part, I’d like to
talk a little bit about some of
the longer-term challenges, some of the longer
term directions for the network, and think
about where the network might be going,
not in the next five years but
in the next 10 to 20 years.
And what might be the long-term future
developments of the Internet.
And, to be clear, what's coming in
the remainder of this part is speculative.
It’s my biased opinion of where I
see the network going, based on my
interests, based on the research that I
have seen happening.
But it's very much speculative. It may not come true.
But it's pointing to areas which I think
are interesting developments.
And nothing in this section is going to be assessed.
So where's the network going in the long term?
Well, I think, to get some understanding of
that, we need to look at the process by which
new ideas, new research, get incorporated into the network.
And, on the one hand, what we
see on the left of this slide, we have
the organisations that promote research
into computer networks.
The Association for Computing Machinery,
the USENIX Association, and the IEEE,
all of whom sponsor
both industrial and academic research in this
area, all of whom publish research in this area.
And this is the pure research side
of network development. This is people speculatively
trying to understand how the network could change.
In the middle, you have organisations like
the IRTF, the Internet Research Task Force,
which try to form the bridge between
these research organisations
and the standards organisations,
such as the IETF, which develop the
standards which we actually deploy.
And one of my other activities
is that I chair the IRTF.
And the IRTF is a body which
promotes the evolution of the Internet.
It’s promoting the longer-term research and development
of the Internet protocols, and, as I say,
it's trying to bridge these organisations together.
And so by looking at some of
the work that's happening in the IRTF,
we can perhaps get an idea of
how the network might evolve, and what's
coming down the pipeline towards standardisation.
So the IRTF is organised as a
set of research groups, which focus on
longer-term development of ideas and protocols.
And it's organised to provide a forum where
the researchers and the engineers can
explore the feasibility of different research ideas.
And where the researchers,
developing ideas for the future of the
network, can learn from the engineers,
and the operators, who actually build and
operate the Internet.
But, equally, where the standards developers,
the engineers, the operations community, the implementors,
can learn from the research community.
Where the two can come together.
As I say, it’s organised as a
set of research groups. There are currently 14
research groups, listed on the
slide; I’ll talk about these in a
little more detail in a minute.
And as we can see, they’re covering a wide range of topics.
And there’s also an annual workshop we
organise, to help bring the communities together.
So what do the research groups do?
Well, they’re focused on several different topic areas.
One of which is the
space around security, and privacy, and human rights.
The Crypto Forum Research Group, the CFRG,
focuses on long-term development
of cryptographic primitives and techniques,
and guidance for using those techniques.
This is a research group looking at
new cryptographic algorithms, replacements for AES,
replacements for elliptic curve cryptography,
new elliptic curve algorithms, and the like.
And this is focused, very much, on
cryptographic techniques
which support various privacy-enhancing
technologies. And it's beginning to focus on
post-quantum cryptography: cryptographic techniques that
can work in a world with working quantum computers.
We have a privacy-enhancing technologies group,
which is focused on
the challenges of metadata in the network:
on building a network that doesn't use
addresses, or that hides IP addresses,
in a way that prevents tracking.
And ways of providing privacy-enhancing logins,
and authentication tokens, and the like,
that can avoid tracking.
And we have a human rights protocol
considerations group, which is beginning to look
at, and understand, how Internet protocols and
standards affect human rights and privacy at
the Internet infrastructure level.
And it's looking at the
right of freedom of association on the
Internet, for example, and how that's affected
by protocol design.
It’s looking at
how protocols affect inclusivity and access,
and so on, and it's looking at
the politics of protocols.
And these three groups are looking at
the interplay between security, privacy, and human rights,
and trying to raise awareness of
the broader societal and policy issues in
the standards community.
There's an interesting, I think, thread of
technical development, looking at the combination of
networks and distributed systems.
Looking at speculative new architectures for the Internet,
which either emphasise data or emphasise computation.
If you think about the current network,
IP addresses identify devices, they identify attachment
points for devices in the network.
And these groups are looking at
the generalisation of content distribution networks,
and web caching infrastructure, and thinking about
what would happen if we replaced IP
addresses with content identifiers?
So the network would route towards particular
items of content, rather than routing packets
towards particular locations.
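The core idea behind naming content rather than locations can be sketched very simply: if a piece of content is named by the hash of its bytes, then any replica anywhere in the network can serve it, and the receiver can verify it without trusting the host it came from. This is a toy sketch of that principle only, not a model of any specific information-centric networking protocol; the class and method names are my own.

```python
import hashlib

class ContentStore:
    """A toy content-addressed store: data is named by the SHA-256
    hash of its bytes, so the name is location-independent, and the
    fetched data can be verified against the name itself."""

    def __init__(self):
        self._objects = {}

    def publish(self, data: bytes) -> str:
        """Store `data` and return its content-derived name."""
        name = hashlib.sha256(data).hexdigest()
        self._objects[name] = data
        return name

    def fetch(self, name: str) -> bytes:
        """Retrieve data by name, verifying that it matches.
        The check means we trust the data, not the server."""
        data = self._objects[name]
        if hashlib.sha256(data).hexdigest() != name:
            raise ValueError("content does not match its name")
        return data
```

In a content-centric network, the routing system would forward a request for such a name towards the nearest copy, wherever it happens to be cached, instead of forwarding packets towards a fixed destination address.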
Or they’re looking at
generalising the network so that, rather than
routing towards addresses, it routes towards named functions,
generalising the idea of serverless computation.
And the idea of both of these
groups is to think about what might
happen if you rearchitect the network around
either content, or computation, or both.
And think about the merger of communication,
data centres, computation, and data warehouses,
to form one large distributed system,
rather than an interconnection network which connects
compute devices, data stores, at the edges.
And thinking about the implications
of this change, towards a network with
ubiquitous data, or ubiquitous computation, for the
content provider/consumer relationship.
Thinking about whether this will help democratise the
network, whether it will help with hosting content throughout
the network in a way which empowers consumers,
or whether it will simply ossify the current
roles, the current content distribution networks,
and the large-scale cloud providers.
And it’s looking at alternative architectures,
and how they can influence the way forward.
And all this leads to networks which
no longer have IP addresses as their
core, that no longer have the Internet
Protocol as their core, but are much
more about distributed computation and data.
There's a research group looking into a
technique known as path-aware networking.
And this is the idea of trying
to explore what can happen if we
make the applications, and the transport protocols,
much more aware of the network path,
and the characteristics of the network path.
Or, similarly, if we make the network
much more aware of the applications and
the transports that are running on it.
And this potentially has benefits:
for improving the quality of
service, for allowing applications to request special
handling in the network to improve performance,
maybe to request low-latency service,
or specialised in-network processing.
But, equally, it has potential challenges,
because it introduces a control point.
It introduces a way for the operators
to control the types of applications that
can run on the network.
And there are some significant questions around
trust, and privacy, and network neutrality,
which are relatively poorly understood.
And this is an area where
the IETF community currently seems determined
to enter a standardisation phase.
There's a technique called segment routing,
and segment routing over IPv6, SRv6,
which is starting to
work its way through the standardisation process,
and starting to get some traction,
and which builds some of this application
awareness into the network infrastructure.
And there’s a technique called APN,
an application-aware networking scheme,
that’s going in the same direction.
And a number of
large Internet companies are pushing in this space.
In the IRTF, I think the research
groups are looking at some of
the broader questions, trying to
understand what are the privacy implications,
what are the security implications, and what
are the incentives for both the endpoints to deploy
these features, for the applications to deploy
these path-aware features, and for the operators
to enable them. And how does it
shift the balance of control between the
applications, and the end-users, and the network operators.
And I think there's some interesting unsolved
questions in that space.
In the longer term, we have a
group looking at designing the quantum Internet.
And the idea, here, is that it
seems likely that people will
manage to build working, large-scale, quantum computers
in the next few years.
And if they do that, they will
want to network and interconnect those computers.
The quantum Internet group is looking at
how we can architect a network that
provides quantum entanglement as a service.
It’s looking at how to build global-scale
distributed quantum computers.
And this is very much the exchange
of Bell Pairs; it’s the exchange of
quantum entangled state.
And it’s leading to a surprisingly traditional
network architecture: a control plane that looks
like the control plane used in a
lot of Internet service provider networks.
But rather than managing circuits and traffic
flows, it manages the setup of optically
clear paths, which can be used to
transmit entangled photons,
to manage entangled quantum state.
And this group’s coming to the conclusion
of its architecture development phase, and is
starting to build experiments, starting to prototype
these systems, and see if they actually work.
And people are actually starting to build
the initial versions of the quantum Internet,
and do at least small-scale experiments with
networked quantum computers and quantum entanglement.
And, perhaps more pressingly,
we have a group, the Global Access
to the Internet for All group,
which is looking at global access and
sustainability, and it's looking at how to
address the global digital divide.
It's trying to share experiences and best
practices, foster collaboration in helping build,
and develop, and make effective use of
the Internet in rural, and remote,
and under-developed regions. And there’s a lot
of interest, a push, towards community-run,
community-led networks,
to provide a more sustainable, more locally
run network, which reflects the needs of
the local communities rather than those of the mega-corporations.
And it's trying to develop a shared
vision towards building a sustainable global network.
And, most of the focus here is
on developing countries, and on
building a fairer, more sustainable, network in
those parts of the world. But it's
also looking at access for less developed,
perhaps more rural regions, of
the world. And there's been some interesting
work trying to build community networks in
the Scottish Highlands and Islands, for example,
where there’s more constrained infrastructure.
But it's also talking about energy efficiency,
and renewable power, and building networks which
work much more sustainably.
And there are other groups, which I
don't have time to talk about in
detail, looking at measuring and understanding network
behaviour, in the Measurement and Analysis for Protocols group.
Looking at developing new congestion control
and network coding algorithms, to improve performance and
make applications more adaptive.
Looking at intent- and
artificial intelligence-based approaches
to managing and operating networks.
Understanding the issues of trust, and identity
management, and name resolution, and resource ownership,
and discovery, in decentralised infrastructure networks.
And looking at some of the challenges,
the research challenges, from initial, broad,
real-world deployments of Internet-of-Things devices,
and how we can make those devices more sustainable,
more programmable, and more secure.
The key thing I want to get
across is that the network, the Internet, is not finished.
The protocols and fundamental design are still
evolving, they're still changing.
There is, perhaps, a view of networking you get
from reading various textbooks, that
the Internet is IPv4, and TCP, and the web.
And it's always been that, and it always will be that.
But nothing could be further from the truth.
The fundamental infrastructure has
massively shifted over the last few years.
And I think we're in the middle of this enormous
transition, and we are getting rid of
IPv4, and we are getting rid of
TCP, and we're getting rid of HTTP/1.1
and the traditional web infrastructure.
With IPv6 and QUIC, we're seeing a
radical restructuring of both the network infrastructure
layer, the IP layer, to support more
addresses, and to support more programmability,
to support more application semantics.
But also the transport and the web layers,
to replace TCP, and better support real-time
and multimedia transport, and to be more
secure and more evolvable.
And the network is in the middle
of this enormous shift.
And, looking forward, I think there are
potentially even more significant changes to come,
with a merger of computation and communication
and data centres as one
global-scale distributed system.
With some of the ideas around path awareness,
the quantum Internet,
some of the security and sustainability challenges.
The network is not finished.
The network keeps changing.
There’s still some exciting developments to come.
And that’s, essentially, all I have for this course.
To wrap up.
There will, of course, be an assessment at the end.
There'll be a final exam, and it
will be worth the usual 80%,
and will be held in the April/May
time frame as expected.
The exam is structured as a set
of three questions. It’s an "answer all
three questions" rubric.
And it will be focused on testing
your understanding of networked systems.
When answering the exam questions, tell me
what you think, and justify your answers.
The type of online, open-book
exams that we are forced to do
these days focuses much more on deeper
understanding of the material, and much less on
book-work and memorisation.
There's little point asking an exam question which
tests your memory when you're doing
the exam online from
home, with Google next to you.
So the questions will be focussed more
on testing your understanding than on
testing your recall.
There are past exam papers on moodle.
The past exam papers go back some number of years.
As you may perhaps expect, the exam
questions from 2020 are probably more representative
of the style of this year's exam
than the older papers, although there are
certainly questions in this style going back
for many years.
For the assessed coursework, the marks will be
available shortly, and
I apologise that it's taken a little
while to mark some of it.
There's no specific revision lecture, but we
have the Teams chat, and we have
email, so please get in touch if
you have questions about the material.
And, looking forward to next year,
if you're interested in doing Level 4
or MSci projects relating to networked systems,
then please get in touch with me
by email.
I’m always very keen to work with
motivated students to develop projects.
My particular interests, I think, are around
improving Internet Protocol standards and specifications,
and working with the IETF and IRTF
communities to improve the way we build standards.
They’re about improving transport protocols,
real-time applications, and QUIC.
They’re about building alternative networking APIs,
and thinking about how we can use
modern, high-level, languages like Rust to change
the way we program networks, make network
programming easier, more flexible, and higher performance.
And they’re about measuring and understanding the network.
So if you have any interest in
any of those topics, please come and talk to me.
I tend to set projects,
and do research, with a strong focus
on interaction with the research communities,
and with the IETF standards process.
And I have a range of project ideas,
and projects can go in a
range of different ways, some of which
are very strongly technical, some of which
focus much more heavily on the standardisation process,
and the way in which standards
and protocols are developed, looking
at the social and political aspects of
the way the Internet is developing.
So, as I say, if you have
an interest in any of these topics,
please come talk to me.
And that's all we have.
That's what I want to say about networks.
Thank you for your attention over the past few weeks.
I hope you have found some of the material interesting,
and if you have questions or comments
or things you'd like to discuss further,
please do get in touch. Thank you.
Discussion will be open-ended and student-driven, focussed around the
course material in general and the suggested future directions.