It will always be fascinating to me how the written word is used to connect brains and facilitate change. Whether it’s the power of the Gospel to change lives–transmitted via letter and encoded in canon–or the open “blogosphere” of the early internet that bypassed the establishment media, little snippets of text seem to resonate with our brains.
I think what we have now is a demented version of that natural attraction. The algorithms that continue to rule our content and the bazillion dollar websites that have captured our eyeballs with their redefined funnels of truth are abusing our desire to be connected via words. But more on that later. We’re here to talk about 200+ year old essays about the government written by John Jay.
Federalist #2 can be summarized in these points:
We’re all in this together.
We always wanted to be together, until a few people with questionable motives called that into question.
God has blessed us with amazing resources and we are unified in that.
We share a common heritage of ancestry, language, religion, principles of government, and similar manners and customs.
The current government is inadequate in reflecting this unity, and the new Constitution will be better.
So the overwhelming sense of this paper is “unity”. Which brings up a couple interesting contrasts with today.
The Federal government has become huge and dominant over the states. I wonder if the argument for unity has come true in this drift to centralization. Has this top-heavy implementation of what the Federalist Papers advocated for actually resulted in unity?
Jay warned that these calls for disunity were to be treated with suspicion. Today, calls for unity are fewer and further between. Rather than accentuating common heritage of ancestry, language, religion, principles of government, similar manners and customs, we are increasingly called to identify with smaller groups, and find alternate identities. Once we find our identity group, vast volumes of recently brewed philosophy encourage us to be in conflict with other groups. And if the philosophy doesn’t speak to us, then the algorithms will.
Much more to think about. But until then, here’s an AI-generated picture of John Jay taking a selfie for Federalist No. 2.
As we move headlong down the rabbit hole that is AI, we are seeing quite a bit of fear and hyperbole. AI will cause the extinction of humanity, the mass elimination of jobs, and enable all sorts of world-ending scenarios.
Of course, these predictions could be true. AI is indeed a world-changing, ‘disruptive’ technology. Personally, I haven’t seen a watershed with this much water-shedding potential in my 30-year tech career. But I think much of the negativity has a bit of a Chicken Little tone to it.
The cultural touchstones created in fiction and entertainment haven’t helped. Whether it be the brutal, soulless violence of the Terminator or the quiet, plodding evil of HAL 9000, we have been set up to view AI with suspicion. These predictions are a warning, but they are fiction.
While AI’s potential is somewhat unprecedented, there is a historic template for the fear it’s causing. The fear of encryption in the ’90s had much the same tone.
The advent of high-grade encryption available to the masses was very similar to what we’re seeing with AI. Fast, general-purpose processing power was suddenly available to all, and with it came a wave of new applications.
One of those applications was PGP, short for “Pretty Good Privacy”. It combined RSA (Rivest-Shamir-Adleman) asymmetric key encryption with IDEA (International Data Encryption Algorithm) symmetric key encryption to produce an extremely strong encryption application. Its author, Phil Zimmermann, released it for free, which meant that everyone was suddenly able to encrypt data with military-grade encryption.
(Zimmermann originally used his home-brew BassOmatic symmetric cipher but switched after significant holes were pointed out.)
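The RSA-plus-symmetric-cipher structure is easy to sketch. Here’s a toy Python version of the hybrid pattern PGP used: a random session key is wrapped with RSA (using deliberately tiny, trivially breakable numbers), and the message itself is encrypted with a simple hash-based keystream standing in for IDEA. Everything here is illustrative, not secure.

```python
import hashlib
import secrets

# Toy RSA keypair with tiny primes (illustration only -- real PGP used
# 1024-bit-plus keys; these numbers are trivially breakable).
p, q = 61, 53
n = p * q                            # modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def keystream(key: bytes, length: int) -> bytes:
    # Stand-in for a symmetric cipher like IDEA: a hash-based XOR
    # keystream. NOT secure -- it only illustrates the structure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def hybrid_encrypt(message: bytes):
    session_key = secrets.randbelow(n - 2) + 2   # random symmetric key < n
    wrapped_key = pow(session_key, e, n)         # RSA-encrypt the session key
    ks = keystream(session_key.to_bytes(2, "big"), len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, ks))
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key: int, ciphertext: bytes) -> bytes:
    session_key = pow(wrapped_key, d, n)         # RSA-decrypt the session key
    ks = keystream(session_key.to_bytes(2, "big"), len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

wrapped, ct = hybrid_encrypt(b"attack at dawn")
print(hybrid_decrypt(wrapped, ct))  # b'attack at dawn'
```

The asymmetric step only ever has to protect a small key, while the fast symmetric cipher carries the bulk of the data. That division of labor is still how modern encrypted messaging works.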
The result was a crazy mismatch between government policy and reality. Encryption was classified as a “munition” under federal law, and exporting it could bring heavy penalties. So technically, exporting the PGP program outside the US could have resulted in over a million dollars in fines and 10 years in prison for each instance.
Needless to say, this was a stark mismatch between commercial availability and legal consequence. The versatility and power of a home computer suddenly gave it the same legal classification as an automatic rifle, an F-16, or plutonium. Historically, even the late Radio Shack didn’t sell such things. Now it suddenly did.
This incongruity was boiled down to illustrative extremes. Emailing four lines of Perl code, or posting them on a website, could expose you to a million-dollar fine and 10 years of jail time. Illegal activity with a computer is commonly understood now, but the idea of a quirky, nerdy home computer being a munition was ludicrous at the time.
The very browser you’re using right now would have been legally the same as a sidewinder missile in the eyes of the law.
While it was widely recognized that this situation was just a little bit crazy–the Gubmint backed off a bit in the mid-’90s–the rhetoric and hyperbole from the Gubmint only escalated. There were many sky-is-falling scenarios about what the world would look like now that everyone in the world could encrypt data.
In 1993 the “clipper chip” was introduced. The Government wanted to put a back door in every encryption device, so they could have access to secure communication:
“Without the Clipper Chip, law enforcement will lose its current capability to conduct lawfully-authorized electronic surveillance.” – Georgetown professor Dorothy Denning
The FBI Director in 1997 famously said:
“Uncrackable encryption will allow drug lords, spies, terrorists and even violent gangs to communicate about their criminal intentions without fear of outside intrusion. They will be able to maintain electronically stored evidence of their criminal conduct far from the reach of any law enforcement agency in the world.” – FBI Director Louis Freeh
Even as recently as 2011 and 2014, law enforcement agencies were saying things like this:
“We are on a path where, if we do nothing, we will find ourselves in a future where the information that we need to prevent attacks could be held in places that are beyond our reach… Law enforcement at all levels has a public safety responsibility to prevent these things from happening.” – FBI General Counsel Valerie Caproni (2011)
And in the modern version of the PGP issue, when the government wanted unfettered access to your iPhone:
“There are going to be some very serious crimes that we’re just not going to be able to progress in the way that we’ve been able to over the last 20 years.” -Deputy Attorney General James Cole (2014)
It’s not hard to see the parallels with the rhetoric we’re seeing around AI. It’s in our nature to entertain extreme visions of a future where the worst-case scenario reigns. But such rhetoric is also a tool used in creating a policy that someone wants. Sometimes they skip the rhetoric and just tell you what they want:
“I don’t want a back door… I want a front door. And I want the front door to have multiple locks. Big locks.” – NSA Director Michael Rogers, speaking at a cybersecurity conference (2014)
So, it’s worth noting that, like encryption, the extreme rhetoric we’re seeing in AI is probably not just to get clicks and readers. It reflects a policy push, both overt and covert. The warnings in public were matched by very serious efforts behind the scenes to address the fear of a world with readily-available, strong encryption.
These efforts were revealed in the Edward Snowden leaks. They included secret partnerships with private companies, extensive efforts to break encryption, and covert efforts to sabotage proprietary and open-source projects. This could be the subject of an entire post. But you can bet similar efforts are being implemented due to the perceived threat of AI.
Did these extensive efforts help us? It’s impossible to know. Like the barking dog who thinks his efforts thwart a mass murder by the mailman, it could be an illusory correlation. Or the end of the world could have been prevented multiple times.
So as we move into the world of AI, we may be facing an unprecedented scale of impact. The situation itself, however, is very much precedented. It’s best to push past the scary rhetoric and get into the messy world of actual analysis and prediction.
We should also understand that massive, massive amounts of capital and human effort are working behind the scenes in ways we may never know about.
The legislature in which I serve is now considering a distracted driving ban. I’m not going to go into that bill, but it does usher in my next topic. We are about to see a crazy revolution in user interfaces driven by AI. It will render touchscreens obsolete and change the whole topic of distracted driving. Which will be a good thing.
It’s already cliché to say that AI will change everything. So we will just talk about this one part.
The AI revolution started in earnest a few months ago with the release of ChatGPT. Yes, there have been many milestones before that, but I really think ChatGPT will be seen as the turning point that brought AI into common thought and wove it into the zeitgeist of tech. Everyday people are already using ChatGPT to get things done.
While ChatGPT is amazing, and the corresponding efforts by Google et al will be equally amazing, probably the most profound revolution will be in the way we interact with technology. I see it as the 3rd big phase of this topic. Let’s look at that, but first, let’s look at the first two phases.
The first phase was stationary and tactile.
Computing tech was stationary primarily because it was huge. It took tons of space. It needed tons of power and cooling. In some mainframe implementations, it actually needed water! Even as it shrank, it still needed a desk top.
When ‘luggables’ and laptops started to enter the picture, they were still just mobile implementations of a stationary experience. That still holds today.
Computing was also tactile. I’m not sure why this is, but I think it was just assumed to be good design. The keyboard made a satisfying “click” when you used it. The mouse was weighted well, and the buttons gave good feedback. Your fingers could provide information to your brain about what was happening.
And this is an important point. Tactile interfaces could provide feedback and context without looking at the interfaces. Keyboards had a ‘nub’ on certain keys so you could put your hands in position without taking your eyes off the screen or document. The clicks of the mouse and keyboard could report that an input had been received without a visual confirmation.
Tactile interfaces leveraged one of our 5 major senses to interact with the technology. This is a big deal, and it’s an aspect well understood by the gaming community. The tactile interface of your mouse or keyboard can mean (virtual) life or death, and there’s a huge market of expensive implementations.
Losing the tactile interface eliminated an entire sense from our interaction with technology. It has likely cost us hundreds of thousands of lives and drastically reduced our productivity, which leads us to the 2nd phase.
The second phase (which we’re in) is mobile and visual.
In the second phase, technology got small enough to be portable.
The early part of the second phase maintained its ancestor’s tactile aspect.
Because tactile interfaces didn’t require your eyeballs, they didn’t affect your overall interaction with the real world. Bike couriers would famously ride through large cities while texting on a phone in their pocket. Kids could text in class without being discovered. It was a unique blend of the first and second phases.
Then Apple perfected the touchscreen, and the 2nd phase picked up momentum.
The touchscreen eliminated all of the tactile aspects of the prior world. You still touched the technology, but you had to look at it. And once those eyeballs descended to the touchscreen, they never left. And once those eyeballs were locked on the screen, the level of distraction skyrocketed.
However, it’s important to realize that we are not distracted because we want to be. We’re distracted because we have to be. Once the touchscreen entered the picture (get it?), we were forever distracted by design. As our world has become app-dependent, distraction has become a requirement to exist.
None of that will matter, however. AI will prove to be a better way of interacting with technology, and it can replace touchscreens by simply being added to the mix. As a disruptive tech, it will easily crush the touchscreen in terms of interaction.
The Coming Age of the AI-Driven Interface
I know, I know. Siri stinks. Siri is buggy, gets words wrong, is Apple-centric and is really limited in usefulness. But Siri and Alexa and such are mere shadows of what is to come.
Imagine saying “Hey [phone], can you plan a route to the beach, and try to find a way that avoids normal spring break traffic jams. Oh, and take us through some of the more scenic drives. Maybe a small historic church or small town courthouse. Also, make a playlist for the trip that is good for driving with some of my family’s favorite songs…be sure and add the Beach Boys into the list as we get closer to the coast.”
This would take an hour or two of pre-planning in the current interaction model. It would require many clicks and keyboard taps. If you did it on a phone or tablet, it would probably take even longer.
More importantly, it would be impossible to do while driving. And you would be hyper-focused on the interaction wherever you did it. But AI interactivity will completely free up your time and focus. You will be able to ask this question of your car, your phone, or a device we haven’t contemplated yet. And you’ll be able to do it after you’ve left, with your eyes on the road and your hands at 10 and 2.
The AI-driven interface will insert itself between you and the technology. It will eliminate the need to touch and look, and will handle all the abstraction of bouncing between apps.
There is much more to think about in all this. The best model is to imagine a college student who is always there ready to interact with your phone for you when needed. Think of how that would change your interaction with day-to-day technology. You’ll only look at the screen when needed, and you’ll only be distracted when you choose to be.
Every few months, a crypto exchange fails. Crypto exchanges–the sites and systems where you can convert regular currency to cryptocurrency–have a habit of failing, and the results have been a steady stream of people losing money. The losses seem to get bigger and bigger.
The recent failure of FTX is by far the largest and most spectacular. It’s probably the most damaging to the perception of crypto due to its intersection with political drama. But it’s by no means the only failure:
Mt. Gox 2014 – The first really large exchange fell to hacking. The site, originally intended as a place to trade “Magic: The Gathering” cards, was hacked, with losses of $450 million at the time.
QuadrigaCX 2018 – A Canadian site went down when the owner mysteriously died, and investigators subsequently couldn’t find any of the funds.
Thodex 2021 – A Turkish exchange went down when the owner disappeared. Loss was upwards of $2 billion.
These are just 3 failures in a list of 50+ since 2009. Many people have lost billions of dollars, and many people have illicitly benefited. When this happens, it is usually tacitly called a failure of crypto itself. All of cryptocurrency–as a technology–is called a scam, a pyramid scheme, etc.
But why is this happening, and does it mean there’s no future in blockchain based money? We can answer that, but first we have to look at a basic principle in crypto.
Cryptography is always vulnerable at the endpoints.
This is a key principle in understanding how to secure things with cryptography. If you want to defeat cryptography, attacking cryptography itself is hard. Attacking things outside cryptography is easier.
For example, a message that hasn’t been encrypted yet can be read. So you can compromise the computer and read it before it’s encrypted. Or you can set up a “man in the middle” attack, where you secretly put yourself between the two communicating parties.
In its most simple implementation, a “rubber hose” attack can be used to physically threaten a person and get the key to decrypt something. This may be applied illicitly and illegally, or even by a legitimate court who threatens jail time for not revealing a key.
In all of these examples, the method of cryptography is secure. It’s the ‘stuff’ around it that’s not, so an attacker attacks that ‘stuff’. It’s not enough to use good crypto; you have to secure the other ‘stuff’ around it as well.
Modern cryptography is secure. Blockchain technology is secure. If you maintain cryptocurrency in a wallet, and you take basic steps to secure it, you’ll be fine. Wallets, as an endpoint, are very secure.
Wallets are super cool. They make you feel like James Bond when using them. But Blockchain wallets are also hard and unforgiving. Maintaining a blockchain wallet of any kind is not for the faint of heart. If you lose it, forget the password, or mismanage your wallet in other ways, you lose everything.
As a result, many average people are delegating that duty to an online exchange and leaving huge sums of money in them. But exchanges are just websites that reside at the endpoints of blockchain technology. So they can be compromised.
What we’re seeing in current cryptocurrency and blockchain scandals is that nobody is securing the endpoints.
Until wallets become more fool-proof, we must anticipate a continued reliance on exchanges. And these endpoints must be hardened to prevent loss. There are some ways to do that, and I’ll discuss that next.
Layer 5, the Session layer, is really a nuts-n-bolts layer that is difficult to explain in context. And the implications are minimal, so we’re going to skip over that. There are some relevant points to VPN and authentication, but the real good parts are in layer 6.
Layer 6 is the subject of a lot of debate. And boy, is it a geeky debate. Think “how many Picards can dance on the head of a pin” kinda debate. I won’t get too into it other than to say some people would disagree with my thoughts on this.
(For my fellow geeks who would disagree…here it is in a nutshell: Layer 6 is where data interoperability lives–compression, encryption, text conversion, etc. The line is a little blurry with layer 7. But in my interpretation, the mechanisms, programs, and code in layer 7 may be very different, yet they are reading the same data and successfully interpreting it. That action indicates a lower abstraction layer, and that layer is layer 6.)
For the non-technical, that means a JPG file from your home security camera also works in a web browser. And an MPG from your iPhone can also play on your Android. And a PDF can work across multiple devices.
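Text conversion, one of the layer 6 jobs mentioned above, is easy to see in a couple of lines of Python. Two programs can only exchange the “same” text if they agree on which byte layout is in use; get that agreement wrong and you get garbage:

```python
# The same text rendered in two standard encodings. Two programs can
# exchange this string only because they agree on which byte layout is
# in use -- that agreement is presentation-layer territory.
text = "héllo"

utf8_bytes = text.encode("utf-8")      # b'h\xc3\xa9llo'  (é takes two bytes)
latin1_bytes = text.encode("latin-1")  # b'h\xe9llo'      (é takes one byte)

# Decoding with the wrong assumption garbles the data (mojibake):
print(utf8_bytes.decode("latin-1"))    # hÃ©llo
# Decoding with the right assumption recovers it:
print(utf8_bytes.decode("utf-8"))      # héllo
```

Every file format in the previous paragraph, from JPG to PDF, is just a bigger version of this same agreement.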
So the most important implication for freedom and democracy at layer 6 is a standard form of media across all devices and programs.
Not too long ago, you couldn’t watch a video from England (and most of Europe) on a device in the US. They used two completely separate video standards (PAL there, NTSC here). You needed a different video tape, a different VCR, and a different TV (or “tele”).
During the 1991 Soviet coup attempt, the coup leaders captured President Gorbachev and sent him away to “rest”. While captive, Gorbachev’s son-in-law Anatoly secretly recorded 4 messages from him to the outside world and cut the physical video tape up so that it could be smuggled out.
Interestingly, I am unable to find any video from this tape online. That may very well be due to the limited ability to encode the format of the tape, since the format of that tape was probably very….Soviet. But you can see a video of Gorbachev describing his captivity by clicking here.
I read somewhere that the written word will be humanity’s only true form of time travel. It is a method of communicating across the ages thoughts directly from one mind to another. When you read a word, the writer reaches out across minutes, years, or eons and puts those thoughts directly into your head for examination.
Video, audio, and other means have a similar effect but there are so many competing factors. The written word is the most direct method.
Is it any wonder, then, that God chose writing to convey His will across all these hundreds of years? The uniqueness of this medium is manifest in the gravity of the phrase “the Word of God”. Indeed, in John 1 God Himself is defined as “The Word”.
Ok so shove me in the shallow waters here. I’m only fixin’ to talk about the government.
It is a trip to think that the direction of government can be completely changed with words. Pamphlets, newspapers, doorhangers, and Facebook posts can all convey thoughts to a critical mass of people and change the course of history. It’s why our First Amendment is so important.
There are many other examples of how this has happened throughout history, but I want to focus on a collection of moderately obscure works called the Federalist Papers.
Most people are familiar with what the Federalist Papers are. But it seems like very few people (including me) have actually dug into them to any degree. This is understandable given the sheer volume and density of the material. But in a time when the validity of the US Constitution is questioned at the highest levels of government, I think it might be a good exercise to dig into such a thorough effort to justify its adoption.
In a nutshell, the Federalist papers (simply labeled “Federalist No. #” where # is a Roman numeral) were a series of articles across several New York newspapers arguing in favor of a new Constitution vs. the old Articles of Confederation.
Federalist #1 was published in “The Independent Journal” on October 27, 1787, and was written by Alexander Hamilton. One month earlier, the new Constitution had been proposed.
It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.
-“Publius” in Federalist No. 1
Here are my summarizing points:
This nation is unique. There are fundamental questions on how a country should function that are being addressed here and nowhere else.
We’re at a turning point here. Either we update how our country is going to work, or things will descend into chaos. It will affect all of humanity negatively.
That chaos will create (and is creating) power for some people, so they will oppose a new constitution.
They will try to hold on to this power by painting the new constitution as oppressive.
Some people are also planning to dissolve or split the union of states to create more power for themselves.
It takes a strong government to protect liberty.
I’m writing under a pseudonym so that the arguments will stand for themselves.
We’re going to go over the utility of a unified, federal government for your political prosperity.
We’re going to show how the existing Articles of Confederation aren’t good enough.
We will show how a new government as proposed in the Constitution is necessary to preserve our original ideas for a republic. We will list the reasons why it will do this.
We will compare it with the current state constitutions.
We will also show how a unified republic as defined in the Constitution is more secure.
So essentially Federalist #1 is an opening statement for the series. It talks a little about why they are being written, what they hope to accomplish, and what points they are going to make.
It’s interesting to think that some of the most fundamental values and structures of our country were once open to such debate. I’m looking forward to digging in further.
Like any child of the 80’s who’s into tech, I’m fascinated by the idea of self driving cars. The only thing cooler would be flying cars, but it seems we’ll have to keep crawling before we can fly.
Thanks to Google and Tesla, self-driving automobiles are now a real possibility. In fact, Tesla’s communication and Musk’s relative record of success have made it more than a possibility. It’s an expectation. There is now a baked-in expectation that self-driving cars will revolutionize the world of transportation.
However, the reality is proving to be more difficult. Delays and complications abound. And predicting timelines has become foolhardy.
The obvious issue is that driving is very, very complicated and unpredictable. So much so that human minds get routinely confused. It just makes sense that artificial minds will have the same issues, that this is a very difficult problem to solve, and that it will take a while to do so.
But there may be ways to speed up the process. And there may be tragic events that will suddenly slow down the process by many years or decades if we’re not smart about all this. Let’s start with the latter.
Lidar and radar and cameras, oh my! Feeding information to self-driving AI is very complicated, and it should give us new appreciation for our own 5 senses. Source: Boston Consulting Group
Artificial Intelligence needs tons of data to learn. This means that AI engines will have to spend huge amounts of time to get the tons of data needed to learn how to drive our roads. I think we’re learning that our roads are more complicated and unpredictable than we thought. Which means the AI behind autonomous driving will take more and more data.
Tesla uses “shadow mode testing”, in which the AI engine pretends to drive a car, and its decisions are tested against the actions of a real driver. The large number of Tesla drivers helps in this regard.
But this illustrates the problem. Artificial intelligence and machine learning depend on mistakes. The systems make mistakes and learn from them. They make an enormous number of mistakes. The more complex the environment, the more data you need. And the more data you need, the more mistakes will be required to generate that data.
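The shadow-mode idea can be sketched in a few lines. This is a hypothetical illustration, not Tesla’s actual pipeline: the model’s proposed actions are compared against what the driver actually did, and the disagreements become the learning signal.

```python
# A minimal sketch of "shadow mode" evaluation: the model's proposed
# actions are compared against what the human driver actually did, and
# disagreements become candidate training data. All names here are
# hypothetical, purely for illustration.

def shadow_mode_eval(model_actions, driver_actions):
    """Return the disagreement rate and the mismatched cases."""
    mismatches = [
        (i, m, d)
        for i, (m, d) in enumerate(zip(model_actions, driver_actions))
        if m != d
    ]
    rate = len(mismatches) / len(driver_actions)
    return rate, mismatches

model  = ["keep_lane", "brake", "keep_lane", "change_left"]
driver = ["keep_lane", "brake", "brake",     "change_left"]

rate, cases = shadow_mode_eval(model, driver)
print(rate)   # 0.25 -- one disagreement out of four decisions
print(cases)  # [(2, 'keep_lane', 'brake')] -- a candidate training example
```

The appeal of this approach is exactly what the paragraph above describes: the system gets to make its mistakes on paper, against a human baseline, instead of on the road.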
Yet driving is dangerous. A mistake in driving can cost lives. So the question quickly becomes “what is our tolerance for mistakes by self-driving cars?” Are we willing to sacrifice lives so that cars can learn to drive themselves?
I think the answer is very likely to be “no”, and probably a more resounding “no” than we anticipate. There have already been some episodes of loss-of-life related to autonomous cars. And there have been odd attempts to cover up some close calls. But the day we have a high profile event–a loss of a family of four, a school bus accident, an elderly veteran run over–public (and legislative) opinion will shift quickly against the current tech.
An episode like that will be tragic for the individuals involved, but it will also set the autonomous vehicle effort back for decades. People are too important, and this tech has too much potential to let that happen. So what can we do?
When it comes to autonomous driving, all the attention is on the cars themselves. That makes sense given the ‘cool factor’ and the agency of the companies making the cars. This is where the work is.
Hardly any attention is paid to the technology of roads themselves. Even less attention is paid to the technology of planning, design, and construction of the roads. It’s just accepted that the roads are what they are.
A huge part of advancing autonomous vehicles, I think, is to develop a set of standards and guidelines that will certify a road for autonomous cars. Autonomous driving should require this certification. It would include things such as:
Universal, standard lane markers, including curb and hash marks in turns
Assisting sensors in blind corners and unprotected turns
Redesign of crosswalks and bike lanes to protect pedestrians and bikers
Standardization of other vulnerable areas such as loading areas for passengers
Indicators of places where pedestrians and other vulnerable individuals are likely to be present: “high caution” areas that tell the AI to enter a heightened state of precision and sensitivity
Appending or tagging some of this information to the GPS standards
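To make the idea concrete, here’s a hypothetical sketch of what a road-certification record might look like. No such standard exists yet; the field names simply mirror the checklist above.

```python
# A hypothetical schema for certifying a road segment for autonomous
# driving -- the fields mirror the checklist above. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    segment_id: str
    standard_lane_markers: bool = False
    blind_corner_sensors: bool = False
    protected_crosswalks: bool = False
    high_caution_zones: list = field(default_factory=list)  # GPS-taggable

    def certified(self) -> bool:
        # A segment is certified only if the core safety items are met;
        # high-caution zones are advisory metadata, not a pass/fail item.
        return (self.standard_lane_markers
                and self.blind_corner_sensors
                and self.protected_crosswalks)

seg = RoadSegment("US-59-mile-120",
                  standard_lane_markers=True,
                  blind_corner_sensors=True,
                  protected_crosswalks=True,
                  high_caution_zones=["school zone near mile 120.4"])
print(seg.certified())  # True
```

An autonomous car could then refuse full self-driving on any segment whose record doesn’t certify, exactly the way aviation restricts operations by airport category.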
Federal and state highways would be pretty easy to outfit, as they already follow standard guidelines. The obvious issue will be local and rural roads.
Google’s self-driving project addresses part of this situation by mapping every area’s detail ahead of time. This approach has a similar effect, in that it ‘certifies’ every road by documenting its features ahead of time. There are a couple problems with this, however.
First, it is a daunting task. Even with the resources at Google’s disposal, it is nearly impossible to map every road. Indeed, Google Street View still misses huge chunks of coverage despite the significant effort to cover everything. And you shouldn’t underestimate the tendency in some places to consider mapping a privacy concern.
Second, streets change, and those changes could have significant implications. Using Street View as a reference, it’s not uncommon to find places that haven’t been visited for many years…again, despite a very comprehensive effort by Google.
Adding and adopting street standards and certification would help Google’s approach and speed up the process.
There are no guarantees in life. Walking out the door has its own level of risk. But when it comes to life-and-death safety, we should mitigate these risks as much as practically possible. When it comes to AI, autonomous driving, and self-driving cars, I think a set of standards and a certification requirement is clearly needed. Moving in this direction now will allow us to leapfrog both delays in adoption and tragedy in achieving adoption.
Layers 3 and 4 are the “network” and “transport” layers, respectively.
While layers 1 and 2 had to do with local traffic, the next two layers create the standards and protocols by which all these local networks can talk to each other (“internetworking”). They operate at a global scale.
OSI Layer 3 – Network Layer
The network layer that currently dominates the world is the IP protocol. Nearly everyone has heard of an IP address by now, probably in frustration as they tried to configure a home device or internet connection.
The power of the IP protocol is in its superior route-ability. There have been other protocols that work well in certain circumstances, but IP proved to be the brilliant solution that literally created the internet.
IP’s superior routability stems from its super simple addressing scheme, in which you take a bunch of numbers (an address), apply another set of numbers (called a mask), and end up with a neatly sliced network-host delineation.
You can think of the network as the street you live on, and the host as the house in which you live. In the address 192.168.1.10 with a mask of 255.255.255.0, for example, 192.168.1 is the network/street and .10 is the host/house.
But IP addressing is far more powerful than a street address, in that networks can be further sliced up using masks. A mask is another set of numbers that defines which part of the address is the network. A longer mask splits the same street into smaller blocks of houses, while a shorter one groups whole streets into a larger area. This slicing can get even more granular and complex as needed.
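The street/house slicing can be seen directly with Python’s standard ipaddress module, where the mask decides where the “street” ends and the “house” begins:

```python
# The street/house analogy in code, using Python's standard ipaddress
# module. The mask decides where "street" ends and "house" begins.
import ipaddress

iface = ipaddress.ip_interface("192.168.1.10/255.255.255.0")

print(iface.network)   # 192.168.1.0/24 -- the street
print(iface.ip)        # 192.168.1.10   -- the house

# A longer mask slices the same address space more finely:
finer = ipaddress.ip_interface("192.168.1.10/255.255.255.240")
print(finer.network)   # 192.168.1.0/28 -- a smaller block of the same street
```

Routers do essentially this computation billions of times a second: apply the mask, find the street, forward toward it.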
I won’t risk complicating a simple and elegant system in trying to address it in one blog post. But the upshot is that millions of devices called routers can reliably and effectively transport huge amounts of data through multiple other routers and back. It’s not uncommon for traffic to go through 10-20 routers on its way to a destination.
OSI Layer 4 – Transport Layer
Layer 4 is the layer that defines a conversation. Take this human example of TCP (Transmission Control Protocol):
Sally: Hello is this joe?
Joe: Yes! This is joe.
Sally: Great! Here’s some info…..*garbled*
Joe: I’m sorry, can you repeat that? Also can you speak a little slower?
Sally: Sure…here…is….some…information…for you. Did you get that?
Joe: Yes I got it. I will deliver it to the appropriate party.
This conversation is a representation of a TCP conversation that happens trillions of times a day. In contrast, here’s an example of UDP (User Datagram Protocol):
Sally: Hey, I’m shouting this to Joe! Joe, if you can hear me, here’s some information for you!
Both of these conversations do essentially the same thing, but with a different set of requirements. These requirements are defined by a layer 4 protocol.
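The two conversations above can be sketched with Python’s standard socket API. This is a minimal local illustration rather than production code; the message contents are hypothetical, and the OS picks a free port:

```python
import socket
import threading

# "Joe" answers a TCP call: the connection starts with a handshake,
# and delivery is acknowledged.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def joe():
    conn, _ = srv.accept()         # "Yes! This is Joe."
    conn.recv(1024)                # receive Sally's info
    conn.sendall(b"got it")        # "Yes, I got it."
    conn.close()

t = threading.Thread(target=joe)
t.start()

# "Sally" makes the TCP call: connect() performs the handshake, and TCP
# itself retransmits anything garbled along the way.
sally = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sally.connect(("127.0.0.1", port))   # "Hello, is this Joe?"
sally.sendall(b"here is some info")
reply = sally.recv(1024)
print(reply)                          # b'got it'
sally.close(); t.join(); srv.close()

# The UDP "shout": no handshake, no acknowledgement. If nobody is
# listening on that port, the datagram simply disappears.
shout = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
shout.sendto(b"Joe, here is some info!", ("127.0.0.1", port))
shout.close()
```

TCP’s handshaking and acknowledgements cost time and bandwidth; UDP trades that reliability away for speed, which is why it suits things like live video and voice.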
Across layer 3 and 4, there are several protocols and combinations of protocols that assist communication. They help control speed of transmission, choosing the best route between hosts, and several other critical functions that help ensure data gets from point A to point B.
Implications for Freedom and Democracy
The redundant, reliable packet-switched (vs. circuit-switched) communications network was created for two reasons. First, the number of computers in the world was very small, and people needed access to them without being physically present. Second, the military needed a way to maintain control of nuclear resources and communications in the event of a nuclear war.
How much each of these goals actually drove the design is somewhat in dispute. That makes complete sense given the supply of movie plots in which a scientific discovery was unwittingly put to military use. It’s pretty clear that everyone involved had their own goals in mind.
But, the implications for today are clear. Using these technologies, you can send data reliably from a very localized device to another very localized device anywhere around the world. We are seeing this play out now in Ukraine. This is a unique enough situation that I will post about it separately.
Because these systems were designed to create access at large scale, they ensure that anyone in the world can communicate with anyone else. They can do this directly, without reliance on a mediator or central third party.
Because these systems were designed, at some level, to survive nuclear hostilities, they are inherently robust and redundant. Getting in the way of these connections is very hard.
Freedom loves communication and the free flow of information. Indeed, it depends on it. Layers 1-4 are great enablers of freedom.
So let’s look at the first 2 layers of the OSI model. These are the “Physical” layer and the “Data Link” layer. These layers are separate and distinct, but in practical application they are usually part of the same implementation.
The physical layer (layer 1) is, as it implies, concerned with the physical elements of a connection. Voltages, pin-outs, mechanical considerations, connectors, etc. In the case of fiber optics, it deals with wavelengths and supported configurations such as single or multi mode. Because it is physical, this layer tends to be focused on local networks or networks with fewer participants.
Because of the radically different technologies out there at the physical layer, there is not really a standard unit of data. It can be very different depending on topology.
The Data Link layer (layer 2) defines the format of the data communicated on top of layer 1: how data is divided into chunks (usually called “frames”), how devices on a local network are addressed (such as with MAC addresses), and how a system knows which frame belongs to which device.
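As a small illustration, the fixed 14-byte Ethernet II header can be unpacked with Python’s `struct` module. The frame here is hand-built and hypothetical, but the layout (destination MAC, source MAC, EtherType) is what every such frame carries:

```python
import struct

# A hand-built, hypothetical Ethernet II frame: the first 14 bytes are
# destination MAC (6), source MAC (6), and a 2-byte EtherType that says
# which protocol the payload belongs to (0x0800 = IPv4).
frame = bytes.fromhex(
    "aabbccddeeff"   # destination MAC
    "112233445566"   # source MAC
    "0800"           # EtherType: IPv4
) + b"...payload..."

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])

def fmt(mac: bytes) -> str:
    return ":".join(f"{b:02x}" for b in mac)

print(fmt(dst))        # aa:bb:cc:dd:ee:ff
print(fmt(src))        # 11:22:33:44:55:66
print(hex(ethertype))  # 0x800
```

A network card compares that destination MAC against its own address to decide whether a frame on the wire is meant for it.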
For layer 1 and 2, most people will have used twisted pair ethernet or various forms of WiFi. If you used a computer at work in the 80’s or 90’s, you may have used other forms of Ethernet or even Token Ring. If you’re really fancy, you may have fiber ethernet coming to your house.
Whatever the case, the implication for freedom and democracy is interoperability. Layers 1 and 2 ensure that your devices can talk to each other at the most basic level.
Information is very important to freedom and democracy. Indeed, it’s why the 1st amendment exists and has been upheld and bolstered as technology advances. Being able to consume and produce information freely is vital to the concept of liberty.
We forget that not too long ago our television, our record player (or 8-track!), our camera, our phones, and everything else all lived in separate worlds. You couldn’t listen to a podcast or stream a news channel across the platform of your choice. More importantly, you couldn’t make a podcast or vlog from that platform at all.
Layer 1 and layer 2 interoperability allows your phone to stream a video connection to loved ones. It allows you to listen to a podcast. If you don’t like the selection of news channels, you can download and view another in the local medium of your choice.
It makes it extremely easy for manufacturers to create cheap and reliable tech that enables all of this. If a manufacturer makes things too proprietary, other devices won’t work with them.
(Having said that, you can also see the creators’ intent and values in layer 1 and 2 technology. If you’ve ever set up an Ethernet network, or even a more modern WiFi network, it’s still a pretty localized, technical process.)
Layers 1 and 2 are important because they are closest to us. They bring the concepts of electronic freedom into our living room.
If you got this far past the title, you’re either a techie, or really bored, or both. But I think it’s a really important juxtaposition in understanding the current state of things.
Marshall McLuhan coined the phrase “the medium is the message” in his 1964 book Understanding Media: The Extensions of Man. I haven’t read the book yet, although I’ve ordered it. I have watched a few of his interviews.
McLuhan’s point was that the overall effect of a communication medium is far more important than the specific message it conveys. The effect of television on humanity is far more important than any television show. An example from my generation: MTV’s effect on youth was based not so much on the content as on television’s ability to capture the attention of people our age and change our thoughts and values.
This thought emerged from the primordial ooze of electronic communication in the 1960s. How profound is the message today? Just look around at people staring at their phones, or count the phones hoisted in the air during a concert. Is whatever people are recording or reading nearly as important as the effect the smartphone has had on everyone? I look forward to reading more of McLuhan’s work.
I have personally found this assertion directly observable in the creation of this blog. Even after 2-3 entries, I’m re-experiencing the sense of wonder and learning I had 20 years ago, when the blog concept first emerged. This form of electronic medium seems to have a positive effect on me, at least.
So, to connect the dots with McLuhan, I am delving into my own related theory of tech with this blog: the intent and values of a creator have a significant effect on the capabilities of a created technology. And further, if we understand the intent and values of the people who created a technology, we can apply that understanding in ways that help us fulfill our own goals. This seems obvious, but hopefully I’ll demonstrate that it isn’t always. (It’s probable that none of this is original thought; I just haven’t found it illustrated anywhere else yet.)
The third critical idea that enters this arena is the Open Systems Interconnection (OSI) model. The OSI model is a conceptual model that helps design and explain interoperability between systems. It’s a good way of separating and identifying the technologies that are part of nearly every aspect of our lives these days.
Because the OSI model describes 7 layers of communication medium, we can use it in concert with the prior concepts to start to figure out what happened, what’s currently going on, and where it can all lead. We can do this at all 7 levels. (And I’ll argue that there’s an 8th.)
Let’s squish all this together. We can parse the building blocks of our electronic experience using the OSI model. We can then use some of McLuhan’s ideas to analyze the effect of each of these layers. We can also look at the intent and values of the creators to gain further insight into how the mediums can be implemented or re-implemented.
Maybe we can identify some of the negative things happening, and come up with ways to fix them.