Layer 5, the Session layer, is really a nuts-and-bolts layer that is difficult to explain in context. Its implications here are minimal, so we're going to skip over it. There are some points relevant to VPNs and authentication, but the really good parts are in layer 6.
Layer 6 is the subject of a lot of debate. And boy, is it a geeky debate. Think “how many Picards can dance on the head of a pin” kinda debate. I won’t get too into it other than to say some people would disagree with my thoughts on this.
(For my fellow geeks who would disagree…here it is in a nutshell: layer 6 is where data interoperability lives: compression, encryption, text conversion, etc. The line is a little blurry with layer 7. But in my interpretation, the mechanisms, programs, and code at layer 7 may be very different, yet they are all reading the same data and successfully interpreting it. That shared interpretation indicates a lower abstraction layer, and that layer is layer 6.)
For the non-technical, that means a JPG file from your home security camera also works in a web browser. And an MPG from your iPhone can also play on your Android. And a PDF can work across multiple devices.
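For a tiny, concrete taste of what layer 6 does, here's the classic text-encoding example (a minimal sketch; the strings are just placeholders):

```python
# A minimal sketch of a layer-6 concern: raw bytes mean nothing until
# both sides agree on a presentation standard (here, a text encoding).
text = "Café résumé"

utf8_bytes = text.encode("utf-8")    # one standard representation

# Decoding with the wrong standard garbles the data ("mojibake")...
print(utf8_bytes.decode("latin-1"))  # prints: CafÃ© rÃ©sumÃ©

# ...while a shared standard makes the same bytes portable anywhere.
print(utf8_bytes.decode("utf-8"))    # prints: Café résumé
```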
So the most important implication for freedom and democracy at layer 6 is a standard form of media across all devices and programs.
Not too long ago, you couldn't watch a video from England (and most of Europe) on a device in the US. The two regions used completely different video standards (PAL there, NTSC here). You needed a different video tape, a different VCR, and a different TV (or "telly").
During the 1991 Soviet coup attempt, the coup leaders captured President Gorbachev and sent him away to "rest". While he was captive, Gorbachev's son-in-law Anatoly secretly recorded four messages from him to the outside world, and cut the physical video tape up so that it could be smuggled out.
Interestingly, I am unable to find any video from this tape online. That may very well be due to the limited ability to digitize the tape's format, since the format of that tape was probably very…Soviet. But you can see a video of Gorbachev describing his captivity by clicking here.
I read somewhere that the written word will be humanity's only true form of time travel. It is a method of communicating thoughts directly from one mind to another across the ages. When you read a word, the writer reaches out across minutes, years, or eons and puts those thoughts directly into your head for examination.
Video, audio, and other means have a similar effect but there are so many competing factors. The written word is the most direct method.
Is it any wonder, then, that God chose writing to convey His will across all these hundreds of years? The uniqueness of this medium is manifest in the gravity of the phrase “the Word of God”. Indeed, in John 1 God Himself is defined as “The Word”.
Ok so shove me in the shallow waters here. I’m only fixin’ to talk about the government.
It is a trip to think that the direction of a government can be completely changed with words. Pamphlets, newspapers, door hangers, and Facebook posts can all convey thoughts to a critical mass of people and change the course of history. It's why our First Amendment is so important.
There are many other examples of how this has happened throughout history, but I want to focus on a collection of moderately obscure works called the Federalist Papers.
Most people are familiar with what the Federalist Papers are. But it seems like very few people (including me) have actually dug into them to any degree. This is understandable given the sheer volume and density of the material. But in a time when the validity of the US Constitution is questioned at the highest levels of government, I think it might be a good exercise to dig into such a thorough effort to justify its adoption.
In a nutshell, the Federalist Papers (each simply labeled "Federalist No. #", where # is a Roman numeral) were a series of essays published across several New York newspapers arguing in favor of the new Constitution over the old Articles of Confederation.
Federalist #1 was published in "The Independent Journal" on October 27, 1787, and was written by Alexander Hamilton. One month earlier, the new Constitution had been proposed.
It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.
-“Publius” in Federalist No. 1
Here are my summarizing points:
This nation is unique. There are fundamental questions on how a country should function that are being addressed here and nowhere else.
We’re at a turning point here. Either we update how our country is going to work, or things will descend into chaos. It will affect all of humanity negatively.
That chaos will create (and is creating) power for some people, so they will oppose a new constitution.
They will try to hold on to this power by painting the new constitution as oppressive.
Some people are also planning to dissolve or split the union of states to create more power for themselves.
It takes a strong government to protect liberty.
I’m writing under a pseudonym so that the arguments will stand for themselves.
We’re going to go over the utility of a unified, federal government for your political prosperity.
We’re going to show how the existing Articles of Confederation aren’t good enough.
We will show how a new government as proposed in the Constitution is necessary to preserve our original ideas for a republic. We will list the reasons why it will do this.
We will compare it with the current state constitutions.
We will also show how a unified republic as defined in the Constitution is more secure.
So essentially Federalist #1 is an opening statement for the series. It talks a little about why the essays are being written, what they hope to accomplish, and what points they are going to make.
It’s interesting to think that some of the most fundamental values and structures of our country were once open to such debate. I’m looking forward to digging in further.
Like any child of the '80s who's into tech, I'm fascinated by the idea of self-driving cars. The only thing cooler would be flying cars, but it seems we'll have to keep crawling before we can fly.
Thanks to Google and Tesla, self-driving automobiles are now a real possibility. In fact, Tesla's communication and Musk's relative record of success have made it more than a possibility: there is now a baked-in expectation that self-driving cars will revolutionize the world of transportation.
However, the reality is proving to be more difficult. Delays and complications abound. And predicting timelines has become foolhardy.
The obvious issue is that driving is very, very complicated and unpredictable. So much so that human minds routinely get confused. It just makes sense that artificial minds will have the same issues, that this is a very difficult problem to solve, and that it will take a while to solve.
But there may be ways to speed up the process. And there may be tragic events that will suddenly slow down the process by many years or decades if we’re not smart about all this. Let’s start with the latter.
Lidar and radar and cameras, oh my! Feeding information to a self-driving AI is very complicated; it should give us new appreciation for our own five senses. (Image source: Boston Consulting Group)
Artificial intelligence needs tons of data to learn, which means AI engines will have to spend huge amounts of time gathering the data needed to learn how to drive our roads. And we're discovering that our roads are more complicated and unpredictable than we thought, which means the AI behind autonomous driving will need more and more data.
Tesla uses "shadow mode" testing, in which the AI engine pretends to drive the car, and its decisions are compared against the actions of the real driver. The large number of Tesla drivers helps in this regard.
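In pseudocode, the idea looks something like this (a rough sketch; the function and field names are invented, since Tesla's actual system is proprietary):

```python
# A hypothetical sketch of shadow-mode testing. Names are invented for
# illustration; this is not Tesla's actual code or API.
def shadow_test(sensor_frames, model, log):
    """Run the model 'in the shadows' and compare it to the human driver."""
    for frame in sensor_frames:
        ai_action = model.predict(frame)    # what the AI would have done
        human_action = frame.driver_action  # what the human actually did
        if ai_action != human_action:
            # Disagreements are the training signal. They get collected
            # and used to refine the model, at zero risk to passengers,
            # because the AI never touches the actual controls.
            log.record(frame, ai_action, human_action)
```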
But this illustrates the problem. Artificial intelligence and machine learning depend on mistakes. The systems make mistakes and learn from them. They make an enormous number of mistakes. The more complex the environment, the more data you need. And the more data you need, the more mistakes will be required to generate that data.
Yet driving is dangerous. A mistake in driving can cost lives. So the question quickly becomes “what is our tolerance for mistakes by self-driving cars?” Are we willing to sacrifice lives so that cars can learn to drive themselves?
I think the answer is very likely to be "no", and probably a more resounding "no" than we anticipate. There have already been some loss-of-life episodes involving autonomous cars. And there have been odd attempts to cover up some close calls. But the day we have a high-profile event (the loss of a family of four, a school bus accident, an elderly veteran run over), public and legislative opinion will shift quickly against the current tech.
An episode like that will be tragic for the individuals involved, but it will also set the autonomous vehicle effort back for decades. People are too important, and this tech has too much potential to let that happen. So what can we do?
When it comes to autonomous driving, all the attention is on the cars themselves. That makes sense given the "cool factor" and the agency of the companies making the cars. This is where the work is.
Hardly any attention is paid to the technology of roads themselves. Even less attention is paid to the technology of planning, design, and construction of the roads. It’s just accepted that the roads are what they are.
A huge part of advancing autonomous vehicles, I think, is to develop a set of standards and guidelines that would certify a road for autonomous cars. Autonomous driving should require this certification. It would include things such as the items below (see the sketch after the list):
Universal, standard lane markers, including curb and hash marks in turns
Assisting sensors in blind corners and unprotected turns
Redesign of crosswalks and bike lanes to protect pedestrians and bikers
Standardization of other vulnerable areas such as loading areas for passengers
Indicators of places where pedestrians and other vulnerable individuals are likely to be present: "high caution" areas that tell the AI to enter a heightened state of precision and sensitivity
Appending or tagging some of this information to the GPS standards
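To make that concrete, here's a purely hypothetical sketch of what a machine-readable certification record might look like. None of these fields come from any real standard; they just mirror the list above:

```python
# A purely hypothetical road-certification record. These fields are
# not from any real standard; they simply mirror the list above.
from dataclasses import dataclass, field

@dataclass
class RoadCertification:
    road_id: str                      # identifier taggable in GPS data
    standard_lane_markers: bool       # universal lane/curb/hash markings
    blind_corner_sensors: bool        # assisting sensors installed
    protected_crossings: bool         # redesigned crosswalks/bike lanes
    standard_loading_zones: bool      # standardized passenger loading
    high_caution_zones: list = field(default_factory=list)

    def certified(self) -> bool:
        """A road qualifies only if every baseline requirement is met."""
        return all([
            self.standard_lane_markers,
            self.blind_corner_sensors,
            self.protected_crossings,
            self.standard_loading_zones,
        ])
```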
Federal and state highways would be pretty easy to outfit, as they already follow standard guidelines. The obvious issue will be local and rural roads.
Google's self-driving project addresses part of this situation by mapping every area's detail ahead of time. This approach has a similar effect, in that it "certifies" every road by documenting its features ahead of time. There are a couple of problems with this, however.
First, it is a daunting task. Even with the resources at Google's disposal, it is nearly impossible to map every road. Indeed, Google Street View still misses huge chunks of coverage despite a significant effort to cover everything. And you shouldn't underestimate the tendency in some places to consider mapping a privacy concern.
Second, streets change, and those changes could have significant implications. Using Street View as a reference, it's not uncommon to find places that haven't been visited for many years…again, despite a very comprehensive effort by Google.
Adding and adopting street standards and certification would help Google’s approach and speed up the process.
There are no guarantees in life. Walking out the door has its own level of risk. But when it comes to life-and-death safety, we should mitigate these risks as much as practically possible. When it comes to AI, autonomous driving, and self-driving cars, I think it's obvious that a set of standards and a certification requirement are needed. Moving in this direction now will allow us to leapfrog both delays in adoption and tragedy along the way.
Layers 3 and 4 are the "network" and "transport" layers, respectively.
While layers 1 and 2 have to do with local traffic, the next two layers create the standards and protocols by which all these local networks can talk to each other ("internetworking"). They operate at a global scale.
OSI Layer 3 – Network Layer
The network-layer protocol that currently dominates the world is IP, the Internet Protocol. Nearly everyone has heard of an IP address by now, probably in frustration while trying to configure a home device or internet connection.
The power of the IP protocol is in its superior routability. There have been other protocols that work well in certain circumstances, but IP proved to be the brilliant solution that literally created the internet.
IP's superior routability stems from its super-simple addressing scheme, in which you take a bunch of numbers (an address), apply another set of numbers (called a mask), and end up with a neatly sliced network-host delineation.
You can think of the network as the street you live on, and the host as the house in which you live. In an address like 192.168.1.42 with a /24 mask, 192.168.1 is the network (the street) and 42 is the host (the house).

But IP addressing is far more powerful than a street address, in that networks can themselves be further sliced up using masks. A mask is a second set of numbers that defines which part of the address identifies the network and which part identifies the host. Apply a shorter mask to the same address, and 192.168 becomes the larger area while 1 becomes the locality within it. This slicing can get even more granular and complex as needed.
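If you want to see the slicing in action, Python's standard ipaddress module does the math (the addresses here are just illustrative):

```python
# The street/house analogy in code, via Python's standard ipaddress
# module. The addresses here are just illustrative examples.
import ipaddress

# With a /24 mask, 192.168.1.0 is the "street" and .42 is the "house".
iface = ipaddress.ip_interface("192.168.1.42/24")
print(iface.network)   # 192.168.1.0/24  (the street)
print(iface.ip)        # 192.168.1.42    (the house)

# Apply a shorter mask and the same address slices differently:
# a /16 groups whole neighborhoods of /24 streets into a larger area.
wider = ipaddress.ip_interface("192.168.1.42/16")
print(wider.network)   # 192.168.0.0/16  (the larger area)
```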
I won't risk overcomplicating a simple and elegant system by trying to address it all in one blog post. But the upshot is that millions of devices called routers can reliably and effectively transport huge amounts of data through multiple other routers and back. It's not uncommon for traffic to pass through 10-20 routers on its way to a destination.
OSI Layer 4 – Transport Layer
Layer 4 is the layer that defines a conversation. Take this human example of TCP (Transmission Control Protocol):
Sally: Hello, is this Joe?
Joe: Yes! This is Joe.
Sally: Great! Here’s some info…..*garbled*
Joe: I’m sorry, can you repeat that? Also can you speak a little slower?
Sally: Sure…here…is….some…information…for you. Did you get that?
Joe: Yes I got it. I will deliver it to the appropriate party.
This conversation is a representation of a TCP exchange that happens trillions of times a day. In contrast, here's an example of UDP (User Datagram Protocol):
Sally: Hey, I'm shouting this to Joe! Joe, if you can hear me, here's some information for you!
Both of these conversations do essentially the same thing, but with a different set of requirements. These requirements are defined by a layer 4 protocol.
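Here's the same contrast as a minimal code sketch, using Python's standard socket module (the host and port values are placeholders):

```python
# TCP vs. UDP in miniature. The host/port values are placeholders.
import socket

# TCP: Sally's phone call. Connect, confirm, retransmit, keep order.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))           # "Hello, is this Joe?" "Yes!"
tcp.sendall(b"here is some information")   # delivery is acknowledged
tcp.close()

# UDP: Sally shouting. No connection, no confirmation, just send.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"Joe, if you can hear me...", ("example.com", 9999))
udp.close()
```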
Across layers 3 and 4, there are several protocols and combinations of protocols that assist communication. They help control the speed of transmission, choose the best route between hosts, and perform several other critical functions that ensure data gets from point A to point B.
Implications for Freedom and Democracy
The redundant, reliable packet-switched (vs. circuit-switched) communications network was created for two reasons. First, the number of computers in the world was very small, and people needed access to them without being physically present. Second, the military needed a way to maintain control of nuclear resources and communications in the event of a nuclear war.
These two goals are somewhat in dispute. And that makes complete sense, given the supply of movie plots in which scientific discovery was unwittingly used for military ends. It's pretty obvious that everyone involved had their own goals in mind.
But, the implications for today are clear. Using these technologies, you can send data reliably from a very localized device to another very localized device anywhere around the world. We are seeing this play out now in Ukraine. This is a unique enough situation that I will post about it separately.
Because these systems were designed to create access at large scale, they ensure that anyone in the world can communicate with anyone else. They can do this directly, without reliance on a mediator or central third party.
Because these systems were designed, at some level, to survive nuclear hostilities, they are inherently robust and redundant. Getting in the way of these connections is very hard.
Freedom loves communication and the free flow of information. Indeed, it depends on it. Layers 1-4 are great enablers of freedom.
So let’s look at the first 2 layers of the OSI model. These are the “Physical” layer and the “Data Link” layer. These layers are separate and distinct, but in practical application they are usually part of the same implementation.
The physical layer (layer 1) is, as the name implies, concerned with the physical elements of a connection: voltages, pin-outs, mechanical considerations, connectors, etc. In the case of fiber optics, it deals with wavelengths and supported configurations such as single-mode or multi-mode. Because it is physical, this layer tends to be focused on local networks or networks with fewer participants.
Because of the radically different technologies out there at the physical layer, there is not really a standard unit of data. It can be very different depending on topology.
The data link layer (layer 2) defines the formats of the data that will be communicated on top of layer 1: how data is divided up into chunks (usually called "frames"), how devices on a local network are addressed (such as with MAC addresses), and how a system knows which chunk of data belongs to which device.
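To make "frames" a little more concrete, here's a simplified sketch of an Ethernet II frame header built by hand (a real frame is assembled by the network card and also carries a preamble and checksum; the MAC addresses below are made up):

```python
# A simplified Ethernet II frame header (layer 2) built by hand with
# Python's struct module. Real frames are assembled by the network
# card and also carry a preamble and checksum; these MACs are made up.
import struct

dst_mac = bytes.fromhex("aabbccddeeff")  # destination MAC address
src_mac = bytes.fromhex("112233445566")  # source MAC address
ethertype = 0x0800                       # 0x0800 means "payload is IPv4"
payload = b"...an IP packet goes here..."

# 6 bytes dst + 6 bytes src + 2 bytes type, then the payload chunk.
frame = struct.pack("!6s6sH", dst_mac, src_mac, ethertype) + payload
print(len(frame), "bytes on the wire (plus preamble and checksum)")
```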
For layers 1 and 2, most people will have used twisted-pair Ethernet or various forms of WiFi. If you used a computer at work in the '80s or '90s, you may have used other forms of Ethernet or even Token Ring. If you're really fancy, you may have fiber Ethernet coming to your house.
Whatever the case, the implication for freedom and democracy is interoperability. Layers 1 and 2 ensure that your devices can talk to each other at the most basic level.
Information is very important to freedom and democracy. Indeed, it's why the First Amendment exists and has been upheld and bolstered as technology advances. Being able to consume and produce information freely is vital to the concept of liberty.
We forget that not too long ago our television, our record player (or 8-track!), our camera, our phones, and everything else all lived in separate worlds. You couldn't listen to a podcast or stream a news channel across the platform of your choice. Or, more importantly, you couldn't make a podcast or vlog from the platform at all.
Layer 1 and layer 2 interoperability allows your phone to stream a video connection to loved ones. It allows you to listen to a podcast. If you don’t like the selection of news channels, you can download and view another in the local medium of your choice.
This interoperability makes it extremely easy for manufacturers to create cheap and reliable tech that allows all of this. If one manufacturer tries to make things too proprietary, other things won't work with it.
(Having said that, you can also see the creators' intent and values in layer 1 and 2 technology. If you've ever set up an Ethernet network, or even a more modern WiFi network, it's still a pretty localized, technical process.)
Layers 1 and 2 are important because they are closest to us. And they bring the concepts of electronic freedom into our living rooms.
If you got this far past the title, you’re either a techie, or really bored, or both. But I think it’s a really important juxtaposition in understanding the current state of things.
Marshall McLuhan coined the phrase "the medium is the message" in his 1964 book Understanding Media: The Extensions of Man. I haven't read this book yet, although I've ordered it. I have watched a few of his interviews.
McLuhan's point was that the overall effect of a communication medium is far more important than the specific message conveyed. The effect of television on humanity is far more important than any television show. An example I can think of from my generation: MTV's effect on youth was based not so much on the content as on television's ability to capture the attention of people our age and change our thoughts and values.
This thought emerged from the primordial ooze of electronic communication in the 1960s. How much more profound is the message today? Just look around at people staring at their phones, or count the phones hoisted in the air during a concert. Is whatever people are recording or reading nearly as important as the effect the smartphone has had on everyone? I look forward to reading more of McLuhan's work.
I have personally found this assertion directly observable in the creation of this blog. Even after two or three entries, I'm experiencing the same surge of wonder and learning I had 20 years ago, when the blog concept first manifested. This form of electronic medium seems to have a positive effect on me, at least.
So, to connect the dots with McLuhan, I am delving into my own related theory of tech with this blog: the intent and values of the creator have a significant effect on the capabilities of a created technology. Further, if we understand the intent and values of the people who created a technology, we can apply that knowledge in ways that help us fulfill our own goals. This seems obvious, but hopefully I'll demonstrate that it isn't always obvious. (It's probable that none of this is original thought. I just haven't found it illustrated anywhere else yet.)
The third critical idea that enters this arena is the Open Systems Interconnection (OSI) model. The OSI model is a conceptual model that helps design and explain interoperability between systems. It is a good way of separating and identifying the technologies that are a part of nearly every aspect of our lives these days.
Because the OSI model describes seven layers of a communication medium, we can use it in concert with the prior concepts to start to figure out what happened, what's currently going on, and where it can all lead. We can do this at all seven levels. (And I'll argue that there's an 8th.)
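For reference as we climb through them in these posts, here are the seven layers as usually numbered (the protocols in the comments are just common examples):

```python
# The seven OSI layers as usually numbered, for reference in these
# posts. The protocols in the comments are just common examples.
OSI_LAYERS = {
    1: "Physical",      # cables, voltages, radio
    2: "Data Link",     # frames, MAC addresses, Ethernet, WiFi
    3: "Network",       # IP, routing
    4: "Transport",     # TCP, UDP
    5: "Session",       # managing conversations between hosts
    6: "Presentation",  # encoding, compression, encryption
    7: "Application",   # HTTP, SMTP, FTP
}
```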
Let's squish all this together. We can parse the building blocks of our electronic experience using the OSI model. We can then use some of McLuhan's ideas to analyze the effect of each of these aspects. And we can look at the intent and values of the creators to gain further insight into how the mediums can be implemented or re-implemented.
Maybe we can identify some of the negative things happening, and come up with ways to fix them.
From a conceptual standpoint, the Internet developed from the ground up starting in the early 1900s. It started with theories of information and hard science (voltages, frequencies, and the like) that went really deep into math and science, in ways that will give you tremendous respect for that $99 router at Best Buy.
The efforts then moved into methods of connectivity as the Cold War deepened in the '50s and '60s. It was during this time that a middle layer of building blocks was created, one that ensured robust connectivity and flexibility in the network. After many different efforts and competing theories, the TCP/IP standard was implemented on January 1, 1983.
When the Internet was opened to the public, the upper layers developed in earnest. Some early protocols, like SMTP (email) and FTP (file transfer), were updated and still exist. Others, like Gopher, were displaced by HTTP and the now-ubiquitous World Wide Web, which arrived in 1991. Protocols that were easy and effective survived. Others were updated or dropped.
(Techies out there will see that this process was really about climbing the ladder of what is now called the OSI model.)
The takeaway here is that the Internet developed like this:
How can we make something that works using physics, electricity, and connectivity?
How can we arrange this thing so that it works well even when powerful entities don’t want it to work at all?
Now that everything is connected, how can we share information in a way that is accessible and easy at a huge scale?
It was at this third point that things started to go really well. As we shall see in subsequent posts, it was also where the seeds of censorship were sown.
As the use of the Internet via the World Wide Web quickly (but also slowly) exploded, finding information was about to become an issue. The Web was a library of documents with no organization and no index. Think of a stack of 100 unlabeled books in a dark closet, where all you have is a flashlight to find what you're looking for.
The problem was so obvious that multiple entities began solving it in multiple ways while the web was but a flicker on a PhD student's desktop.
JumpStation went live in 1993, when the total number of websites was less than 200.
WebCrawler, Lycos, and Excite followed in 1994, when the total number of sites had grown to around 20,000.
AltaVista and Yahoo started in '95-'96, when the total number of sites was still well below 500,000.
Dogpile and Ask Jeeves started around '96. And, of course, the 1-trillion-plus gorilla of Google started in 1998 at the dawn of the dot-com era. By then the number of web pages was well into the millions and tens of millions.
An undefinably large amount of work by brilliant individuals had created the ultimate information-sharing tool. In around 80 years, humanity had gone from theory on a page to an invention that had the potential to fundamentally change the direction of our history.
But it would also do so by subtly changing fundamentals and definitions that had been taken for granted for quite a while. In making so many connections, we had created a very dynamic vessel for defining and changing what things mean.
This change could include the very intent of the invention and purpose for inventing it.
I was reading through my old blog archives the other day. As I parsed through the slow slog of daily entries, I realized just how bad things have become over the last 10 or so years. The internet used to be a positive place, full of interesting things and cool information.
Back then, it wasn't about likes or re-tweets/shares/broadcasts. It was just a fun way of putting some things out there that people might enjoy. There was less concern about how many people were interested in what you said (although you knew a certain number were), and you didn't really have much information on how many were anyway.
I’ve also looked at old social media archives from the 2010 era, and the realization was similar.
This used to be fun….
What in the world happened?
I know 'blogging' is dated, and all this sure seems like a hackneyed good-ol'-days rant. But I think there's more to it.
Let's explore what happened. And further, let's NOT just resolve to patch it up. The internet is still an open and positive place if we'll let it be. But we can't build on the things that sent it awry. We have to step back and look at the fundamentals.