OSI Layer 7 – Where Freedom Takes a Detour


So up to this point, we’ve highlighted the wonky technical stuff illustrating just how resilient the Internet is. And how intentional that resilience is. There is a tremendous amount of intelligence and money applied to make sure that communication amongst many entities can happen, no matter what.

Now we arrive at the final layer, the application layer, and we will feel right at home at this layer. This layer is where humans interact with the technology. It’s where bazillions of dollars are made. It’s where all the magic happens and what all the fuss is about. Without layer 7, none of the other 6 layers matter.

The first major applications were things like:

  • Telnet (1969), which allowed users to remotely access a computer system as if they were sitting in front of it.
  • Email (1971), created by Ray Tomlinson, which upgraded his prior mainframe-only email to work over the network.
  • FTP (1971), the File Transfer Protocol, which is somewhat self-explanatory.
  • Usenet (1979), a bulletin-board-like system that allowed users to post, read, and reply to public messages.
  • IRC (1988), by Jarkko Oikarinen; Internet Relay Chat allowed users to join chat rooms and interact with each other directly.
  • Gopher (1991), by Mark P. McCahill, was a spiritual precursor to the web, allowing users to find documents.

While initially proving popular, the Gopher protocol has largely disappeared. Though around 300 servers remain, and they will never quit, ever.

These early applications laid the groundwork for the rich ecosystem we have today. However, they were relatively static and specialized in their functions. Each served a specific purpose: Telnet for remote access, Email for messaging, FTP for file transfer, and so on. While groundbreaking for their time, these applications were limited in their flexibility and scope.

Then came the game-changer: HTTP (Hypertext Transfer Protocol) and the World Wide Web.

Developed by Tim Berners-Lee in 1989 and publicly released in 1991, HTTP and the web created the ultimate flexible addition to Layer 7. Unlike its predecessors, the web wasn’t designed for a single, specific purpose. Instead, it provided a general-purpose platform that could be adapted for almost any type of application.

What made HTTP and the web so revolutionary was their simplicity and extensibility:

  1. Hypertext: The ability to link documents (which eventually became pages) together created a web of information, letting users navigate between related documents instead of reading each one in isolation.
  2. Statelessness: Each request-response cycle is independent, which simplified server design and allowed for easy scaling.
  3. Content Types: HTTP could serve various types of content (text, images, audio, video), making it incredibly versatile.
  4. Client-Server Model: This separation of concerns allowed for rapid innovation on both ends.
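
To make that request-response cycle concrete, here is a minimal sketch in Python of a single, stateless HTTP exchange. The hostname is just a placeholder; the point is that every request stands alone, and the Content-Type header tells the client what kind of content it received.

```python
import http.client

# One complete, self-contained HTTP request-response cycle (statelessness in action).
conn = http.client.HTTPSConnection("example.com")   # placeholder host
conn.request("GET", "/")
resp = conn.getresponse()

print(resp.status, resp.reason)            # e.g. "200 OK"
print(resp.getheader("Content-Type"))      # tells the client whether it got HTML, an image, video, etc.
body = resp.read()                         # the actual document

conn.close()   # nothing about this exchange needs to be remembered by the server
```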

The web’s flexibility meant that developers could create applications that were previously unimaginable. Suddenly, you could have:

  • Online stores (Amazon, 1994)
  • Search engines (Google, 1998)
  • Social networks (Facebook, 2004)
  • Video streaming platforms (YouTube, 2005)
  • Microblogging services (Twitter, 2006)

All of these diverse applications run on the same underlying protocol and infrastructure. This flexibility allowed for rapid innovation and democratized app development. Anyone with a basic understanding of HTML and a web server could create content accessible to millions.

Moreover, as web technologies evolved (with the introduction of JavaScript, CSS, and more sophisticated backend technologies), the web became even more powerful. Modern web applications can do almost anything a desktop application can do, from complex data processing to real-time communication.

Layer 7 is incredibly flexible. It’s the wild west of the OSI model, where applications can do pretty much anything they want. Want to create a social media platform? Layer 7. A video streaming service? Layer 7. A decentralized cryptocurrency network? You guessed it, Layer 7.

This flexibility and power is a double-edged sword when it comes to freedom and democracy. On one hand, it has given voices to millions, allowed for the free flow of information on an unprecedented scale, and enabled incredible innovation that can empower individuals and communities. On the other hand, it has led to massive, centralized platforms that now control much of our online experience, with near-total control over what they present to us.

So despite this flexibility and capacity for innovation, we’re increasingly using fewer and fewer sites for more and more of our online activities. Facebook, Google, Twitter, TikTok – these giants have become the primary gateways through which many people experience the internet. This concentration means a handful of sites has an outsized influence on what information we see and how we interact online.

This is all because of something called Metcalfe’s Law. This law states that the value of a network is proportional to the square of the number of connected users. In other words, the more people use a site, the more valuable it becomes due to the network of people it provides. This creates a powerful feedback loop – people join because that’s where everyone else is, which makes the site even more attractive to new users.

Metcalfe’s law, showing the value of a network (and the number of cat memes) as the number of users increases. (source R Uzwyshyn)
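
For a rough sense of the math behind that feedback loop, here is a back-of-the-envelope sketch in Python. The exact constant doesn’t matter; what matters is that the number of possible connections grows roughly with the square of the user count:

```python
# Metcalfe's Law, roughly: value ~ n^2 (n*(n-1)/2 possible pairwise connections).
for users in (10, 100, 1_000, 10_000):
    connections = users * (users - 1) // 2
    print(f"{users:>6} users -> {connections:>12,} possible connections")
```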

So as use of these centralized websites increases, their ability to control content also increases.  They have used this power in questionable ways, even colluding with the government to determine what people see.

But while some have tried to sue and others have tried passing laws, the real answer lies in the technology itself.  While layer 7 makes it possible to censor a single site, the rest of the OSI ‘stack’ makes true censorship nearly impossible.  “The Net interprets censorship as damage and routes around it.” (A quote attributed to John Gilmore in the early ’90s.)  And the ability to route around it is significant.

Next we’ll talk about ways the internet could heal itself from its current ailments.

Federalist #2 – A Pep Talk for Unity

It will always be fascinating to me how the written word is used to connect brains and facilitate change.  Whether it’s the power of the Gospel to change lives–transmitted via letter and encoded in canon–or the open “blogosphere” of the early internet that bypassed the establishment media, little snippets of text seem to resonate with our brains.

I think what we have now is a demented version of that natural attraction.  The algorithms that continue to rule our content and the bazillion-dollar websites that have captured our eyeballs with their redefined funnels of truth are abusing our desire to be connected via words.  But more on that later.  We’re here to talk about 200-plus-year-old essays about the government written by John Jay.

The beginning of Federalist No. 2. Clearly not SEO optimized.

Federalist #2 can be summarized in these points:

  • We’re all in this together.
  • We always wanted to be together, until a few people threw that into question with questionable motives.
  • God has blessed us with amazing resources and we are unified in that.
  • We share a common heritage of ancestry, language, religion, principles of government, and similar manners and customs.
  • The current government is inadequate in reflecting this unity, and the new Constitution will be better.

So the overwhelming sense of this paper is “unity”.  Which brings up a couple interesting contrasts with today.

The Federal government has become huge and dominant over the states.  I wonder if the argument for unity has come true in this drift to centralization?  Has this top-heavy implementation of what the Federalist Papers advocated for actually resulted in unity?

Jay warned that these calls for disunity were to be treated with suspicion.  Today, calls for unity are fewer and further between.  Rather than accentuating common heritage of ancestry, language, religion, principles of government, similar manners and customs, we are increasingly called to identify with smaller groups, and find alternate identities.  Once we find our identity group, vast volumes of recently brewed philosophy encourage us to be in conflict with other groups.   And if the philosophy doesn’t speak to us, then the algorithms will. 

Much more to think about.  But until then, here’s an AI-generated picture of John Jay taking a selfie for Federalist No. 2.

Clearly these were the OG influencers. Do you think today’s “influencers” could make wigs and pirate shirts fashionable?

AI and the Fear of Technologies Past

As we move headlong down the rabbit hole that is AI, we are seeing quite a bit of fear and hyperbole.  AI will cause the extinction of humanity, the mass elimination of jobs, and enable all sorts of world-ending scenarios.

Of course, these predictions could be true.  AI is indeed a world-changing, ‘disruptive’ technology.  Personally, I haven’t seen a watershed with this much water-shedding potential in my 30-year tech career.  But I think much of the negativity has a bit of a Chicken Little tone to it.

The cultural touchstones created in fiction and entertainment haven’t helped.  Whether it be the brutal, soulless violence of the Terminator or the quiet, plodding evil of HAL 9000, we have been set up to see AI with suspicion.  These predictions are a warning, but they are fiction.

While AI’s potential is somewhat unprecedented, there is a historic template for the fear it’s causing.  The fear of encryption in the ’90s had much of the same tone.

The advent of high-grade encryption available to the masses was very similar to what we’re seeing with AI.  Fast, general-purpose processing power was suddenly available to all, and different applications were also suddenly available.

One of those applications was PGP, short for “Pretty Good Privacy”.  It combined RSA (Rivest-Shamir-Adleman) asymmetric key encryption with IDEA (International Data Encryption Algorithm) symmetric key encryption to provide an extremely strong encryption application. Its creator, Phil Zimmermann, released it for free, which meant that everyone was suddenly able to encrypt data with military-grade encryption.

(Zimmermann originally used home-brew BassOmatic symmetric key encryption but switched after significant holes were pointed out.)

BassOmatic encryption was supposed to scramble data like Dan Aykroyd scrambled this fish.  But like the lid in the video, the crypto had a weakness and was soon replaced by a more secure standard.
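
For the technically curious, here is a rough sketch of the hybrid approach PGP popularized, written with Python’s `cryptography` package. It is illustrative only: the symmetric step uses Fernet (AES) as a stand-in, since IDEA isn’t available in that package, and the message is made up.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# 1. The recipient has an RSA key pair (the asymmetric half of the scheme).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 2. The sender encrypts the message with a fresh, random symmetric key (fast).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet me at the usual place")

# 3. The sender encrypts only the small session key with the recipient's public key (slow, but tiny).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# 4. The recipient unwraps the session key with the private key, then decrypts the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"meet me at the usual place"
```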

The result was a crazy mismatch between government policy and reality.  Encryption was considered a “munition” in federal law.  And exporting it could result in heavy fines.  So technically, exporting the PGP program outside the US could have resulted in over a million dollars in fines and 10 years in prison for each instance.

Needless to say, there was a stark gap between commercial availability and legal consequence.  The versatility and power of a home computer suddenly gave it the same legal classification as an automatic rifle, an F-16, or plutonium.  Historically, even the late Radio Shack didn’t sell such things.  Now it suddenly did.

This incongruity with reality was boiled down to illustrative extremes.  Emailing or posting a website with 4 lines of Perl code could expose you to a million-dollar fine and 10 years of jail time.  Illegal activity with a computer is now commonly understood, but the idea of a quirky, nerdy home computer being a munition was ludicrous at the time.

The very browser you’re using right now would have been the legal equivalent of a Sidewinder missile in the eyes of the law.

The absurdity in Federal law created by the advancement of the CPU was illustrated by this t-shirt. Wearing it was wearing a ‘munition’ as defined by the law.

While it was widely recognized that this situation was just a little bit crazy (the Gubmint backed off a bit in the mid-’90s), the rhetoric and hyperbole from the Gubmint only escalated.  There were many sky-is-falling scenarios about what the world would look like now that everyone in the world could encrypt data.

In 1993 the “Clipper Chip” was introduced.  The government wanted to put a back door in every encryption device so it could have access to secure communication:

“Without the Clipper Chip, law enforcement will lose its current capability to conduct lawfully-authorized electronic surveillance.” – Georgetown professor Dorothy Denning

The FBI Director in 1997 famously said:

“Uncrackable encryption will allow drug lords, spies, terrorists and even violent gangs to communicate about their criminal intentions without fear of outside intrusion. They will be able to maintain electronically stored evidence of their criminal conduct far from the reach of any law enforcement agency in the world.” – FBI Director Louis Freeh 

Even as recently as 2011 and 2014, law enforcement agencies were saying things like this:

“We are on a path where, if we do nothing, we will find ourselves in a future where the information that we need to prevent attacks could be held in places that are beyond our reach… Law enforcement at all levels has a public safety responsibility to prevent these things from happening.” – FBI General Counsel Valerie Caproni (2011) 

And in the modern version of the PGP issue, when the government wanted unfettered access to your iPhone:

“There are going to be some very serious crimes that we’re just not going to be able to progress in the way that we’ve been able to over the last 20 years.” -Deputy Attorney General James Cole (2014)

It’s not hard to see the parallels between that rhetoric and what we’re seeing with AI.  It’s in our nature to have extreme visions of a future where the worst-case scenario reigns.  But it’s also a tool used in creating a policy that someone wants.   Sometimes they skip the rhetoric and just tell you what they want:

Speaking at a cybersecurity conference in 2014, NSA Director Michael Rogers said: “I don’t want a back door… I want a front door. And I want the front door to have multiple locks. Big locks.”

So, it’s worth noting that, like encryption, the extreme rhetoric we’re seeing in AI is probably not just to get clicks and readers.  It reflects a policy push, both overt and covert.  The warnings in public were matched by very serious efforts behind the scenes to address the fear of a world with readily-available, strong encryption.

These efforts were revealed in the Edward Snowden leaks.  They included secret partnerships with private companies, extensive efforts to break encryption, and covert efforts to sabotage proprietary and open-source projects.   This could be the subject of an entire post.  But you can bet similar efforts are being implemented due to the perceived threat of AI.

Encryption became a character all its own in many thrillers, even in the relatively early years of 1992. This movie highlights the view of encryption as a tech that can end the world as we know it. It also highlights an odd Aykroyd connection in this post.

Did these extensive efforts help us?  It’s impossible to know.  Like the barking dog who thinks his efforts thwart a mass murder by the mailman, it could be an illusory correlation.  Or the end of the world could have been prevented multiple times.  

So as we move into the world of AI, we may be moving into an unprecedented scale of impact.  However, the situation itself is very much precedented.  It’s best to push past the scary rhetoric and get into the messy world of actual analysis and prediction.

We should also understand that massive, massive amounts of capital and human effort are working behind the scenes in ways we may never know about.

 

How AI Will Make us Safer: Ending Distracted Driving

The legislature in which I serve is now considering a distracted driving ban.  I’m not going to go into that bill, but it does usher in my next topic.  We are about to see a crazy revolution in user interfaces driven by AI.  It will render touchscreens obsolete and change the whole conversation around distracted driving.  Which will be a good thing.

It’s already cliché to say that AI will change everything. So we will just talk about this one part.

The AI revolution started in earnest a few months ago with the release of ChatGPT. Yes, there have been many milestones before that, but I really think ChatGPT will be seen as the turning point that brought AI into common thought and wove it into the zeitgeist of tech.  Everyday people are already using ChatGPT to get things done.

While ChatGPT is amazing, and the corresponding efforts by Google et al will be equally amazing, probably the most profound revolution will be in the way we interact with technology.  I see it as the 3rd big phase of this topic.  Let’s look at that, but first, let’s look at the first two phases.

The first phase was stationary and tactile.  

Computing tech was stationary primarily because it was huge.  It took tons of space.  It needed tons of power and cooling.  In some mainframe implementations, it actually needed water!  Even as it shrunk, it still needed a desk top.

When ‘luggables’ and laptops started to enter the picture, they were still just mobile implementations of a stationary experience.  That still holds today.

A Compaq “luggable” computer. The early ancestor of mobile computing. (Tiziano Garuti)

Computing was also tactile.  I’m not sure why this is, but I think it was just assumed to be good design.  The keyboard made a satisfying “click” when you used it.  The mouse was weighted well, and the buttons gave good feedback.  Your fingers could provide information to your brain about what was happening.

And this is an important point.  Tactile interfaces could provide feedback and context without looking at the interfaces. Keyboards had a ‘nub’ on certain keys so you could put your hands in position without taking your eyes off the screen or document.  The clicks of the mouse and keyboard could report that an input had been received without a visual confirmation.

Tactile interfaces leveraged one of our 5 major senses to interact with the technology.  This is a big deal, and it’s an aspect well understood by the gaming community.  The tactile interface of your mouse or keyboard can mean (virtual) life or death, and there’s a huge market of expensive implementations.

Gaming gear is a rich pageant for the senses. Including plenty of tactile feedback. (razer.com)

Losing the tactile interface eliminated an entire sense from our interaction with technology.  It has likely cost us hundreds of thousands of lives and drastically reduced our productivity, which leads us to the 2nd phase.

The second phase (which we’re in) is mobile and visual.

In the second phase, technology got small enough to be portable.  

The early part of the second phase maintained its ancestor’s tactile aspect.

Because tactile interfaces didn’t require your eyeballs, they didn’t affect your overall interaction with the real world.  Bike couriers would famously ride through large cities while texting on a phone in their pocket.  Kids could text in class without being discovered.  It was a unique blend of the first and second phase.

Then Apple perfected the touchscreen, and the 2nd phase picked up momentum.

The late BlackBerry was an amazing smartphone with a great tactile interface. The keyboard and “click wheel” made it easy to interact with. It was a unique bridge between phase 1 and 2.

The touchscreen eliminated all of the tactile aspects of the prior world.  You still touched the technology, but you had to look at it.  And once those eyeballs descended to the touchscreen, they never left.  And once those eyeballs were locked on the screen, the level of distraction skyrocketed.

However it’s important to realize that we are not distracted because we want to be.  We’re distracted because we have to be.  Once the touchscreen entered the picture (get it?), we were forever distracted by design.  As our world has become app dependent, it has made distraction a requirement to exist. 

An elevator touchscreen. While these may facilitate an easier design, what’s the advantage over push buttons? Watch how much time and attention these screens get the next time you visit one. (src: Disney Military Blog)

Unfortunately, this trend has continued to the point where we’re completely surrounded by touchscreens.  There is at least some recognition that this is a bad thing.   And it’s unlikely that this will change on its own. Touchscreen design is the hegemony of interaction.

It doesn’t matter, however.  AI will prove to be a better way of interacting with technology, and it can replace touchscreens by simply being added to the mix.  As a disruptive tech, it will easily crush the touchscreen in terms of interaction.

The Coming Age of the AI-Driven Interface

I know, I know.  Siri stinks.  Siri is buggy, gets words wrong, is Apple-centric and is really limited in usefulness.  But Siri and Alexa and such are mere shadows of what is to come.

Imagine saying “Hey [phone], can you plan a route to the beach, and try to find a way that avoids normal spring break traffic jams.  Oh, and take us through some of the more scenic drives.  Maybe a small historic church or small town courthouse.  Also, make a playlist for the trip that is good for driving with some of my family’s favorite songs…be sure and add the Beach Boys to the list as we get closer to the coast.”

This would take an hour or two of pre-planning in the current interaction model.  It would require many clicks and taps on the keyboard.  If you did it on a phone or tablet, it would probably take even longer.

More importantly, it would be impossible to do while driving.  And you would be hyper-focused on the interaction wherever you did it.  But AI interactivity will completely free up your time and focus.  You will be able to ask this question of your car, your phone, or some device we haven’t contemplated yet.  And you’ll be able to do it after you’ve already left, with your eyes on the road and your hands at 10 and 2.

The AI-driven interface will insert itself between you and the technology.  It will eliminate the need to touch and look, and will handle all the abstraction of bouncing between apps.

We will clearly have more time to spend on fashion, and our hair, in the age of AI interfaces.

There is much more to think about in all this.  The best model is to imagine a college student who is always there ready to interact with your phone for you when needed.  Think of how that would change your interaction with day-to-day technology.  You’ll only look at the screen when needed, and you’ll only be distracted when you choose to be.

 

Vulnerability of Endpoints and The Problem With Cryptocurrencies

Every few months, a crypto exchange fails.  Crypto exchanges (the sites and systems where you can convert regular currency to cryptocurrency) have a habit of failing, and the results have been a steady stream of people losing money.  The losses seem to get bigger and bigger.

The recent failure of FTX is by far the largest and most spectacular.  It’s probably the most damaging to the perception of crypto due to its intersection with political drama.  But it’s by no means the only failure:

Mt Gox (2014) – The first really large exchange fell to hacking.  The site was originally intended as a place to trade “Magic: The Gathering” cards.  The losses at the time were around $450 million.

QuadrigaCX (2018) – A Canadian exchange went down when the owner mysteriously died, and investigators subsequently couldn’t find any of the funds.

Thodex (2021) – A Turkish exchange went down when the owner disappeared.  Losses were upwards of $2 billion.

The Mt Gox failure was one of the first high profile exchange failures. Mt Gox, short for “Magic, the Gathering Online eXchange” was originally designed for trading cards, what could go wrong? (src: Stanford Review)

These are just 3 failures in a list of 50+ since 2009.  Many people have lost billions of dollars, and many others have illicitly benefitted.   When this happens, it is usually tacitly called a failure of crypto itself.   All of cryptocurrency, as a technology, is called a scam, a pyramid scheme, etc.

But why is this happening, and does it mean there’s no future in blockchain based money?  We can answer that, but first we have to look at a basic principle in crypto.

Cryptography is always vulnerable at the endpoints.

This is a key principle in understanding how to secure things with cryptography.  If you want to defeat cryptography, attacking cryptography itself is hard.  Attacking things outside cryptography is easier.  

The movie The Imitation Game shows just how hard it was to defeat a cryptosystem itself. It took plenty of luck and brainpower. Modern crypto systems are not vulnerable like this.

For example, a message that hasn’t been encrypted yet can be read.  So you can compromise the computer and read it before it’s encrypted.  Or you can set up a “man in the middle” attack where you secretly put yourself between the sender and the cryptographic system.

In its most simple implementation, a “rubber hose” attack can be used to physically threaten a person and get the key to decrypt something.  This may be applied illicitly and illegally, or even by a legitimate court that threatens jail time for not revealing a key.

In all of these examples, the method of cryptography is secure.   It’s the ‘stuff’ around it that’s not.  So an attacker attacks that ‘stuff’.  So it’s not enough to use good crypto.  You have to secure the other ‘stuff’ around it, as well.

Modern cryptography is secure.  Blockchain technology is secure.  If you maintain cryptocurrency in a wallet, and you take basic steps to secure it, you’ll be fine.  Wallets, as an endpoint, are very secure.

Wallets are super cool.  They make you feel like James Bond when using them.  But blockchain wallets are also hard and unforgiving.   Maintaining a blockchain wallet of any kind is not for the faint of heart.  If you lose it, forget the password, or mismanage your wallet in other ways, you lose everything.

Trezor and Ledger Hardware Wallets.  regularguy.eth/Unsplash

As a result, many average people are delegating that duty to an online exchange and leaving huge sums of money in them. But exchanges are just websites that reside at the endpoints of blockchain technology.  So they can be compromised.

What we’re seeing in current cryptocurrency and blockchain scandals is that nobody is securing the endpoints.  

Until wallets become more foolproof, we must anticipate a continued reliance on exchanges.  And these endpoints must be hardened to prevent loss.  There are some ways to do that, and I’ll discuss them next.

 

OSI Model Layer 5 and 6, Freedom and Democracy


Layer 5, the Session layer, is really a nuts-n-bolts layer that is difficult to explain in context.  And the implications are minimal, so we’re going to skip over that.  There are some relevant points to VPN and authentication, but the real good parts are in layer 6.

Layer 6 is the subject of a lot of debate.  And boy, is it a geeky debate.  Think “how many Picards can dance on the head of a pin” kinda debate.  I won’t get too into it other than to say some people would disagree with my thoughts on this.

(For my fellow geeks who would disagree…here it is in a nutshell:  Layer 6 is where data interoperability lives: compression, encryption, text conversion, etc.  The line is a little blurry with layer 7.  But in my interpretation, the mechanisms, programs, and code in layer 7 may be very different, but they are reading the same data and successfully interpreting it.  That action indicates a lower abstraction layer, and that layer is layer 6.)

For the non-technical, that means a JPG file from your home security camera also works in a web browser.  And an MPG from your iPhone can also play on your Android.  And a PDF can work across multiple devices.
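
Here is a tiny sketch of what that interoperability looks like in practice: every JPEG, no matter which camera or phone produced it, starts with the same “magic bytes”, which is how any program on any device knows what it’s looking at. The filename is just a placeholder.

```python
# Every JPEG begins with the bytes FF D8 FF, regardless of which device created it.
with open("photo.jpg", "rb") as f:        # placeholder path
    header = f.read(3)

print(header == b"\xff\xd8\xff")          # True for any JPEG, from any camera, on any OS
```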

The tragedy of the 2011 Tsunami in Japan can be felt by billions of people due to the standard formats used to capture the event.

So the most important implication for freedom and democracy at Layer 6 is a standard form of media across all devices and programs.

Not too long ago, you couldn’t watch a video from England (and most of Europe) on a device in the US.  The two regions used completely separate video standards (PAL there, NTSC here).  You needed a different video tape, a different VCR, and a different TV (or “telly”).

During the 1991 Soviet coup attempt, the coup leaders captured President Gorbachev and sent him away to “rest”.  While captive, Gorbachev’s son-in-law Anatoly secretly recorded 4 messages from him to the outside world, and cut the physical video tape up so that it could be smuggled out.

Interestingly, I am unable to find any video from this tape online.  That may very well be due to the limited ability to encode the format of the tape, since the format of that tape was probably very….Soviet.  But you can see a video of Gorbachev describing his captivity by clicking here.

In today’s world, interoperability means you can watch a drone video from the battlefields of Ukraine, watch a debate in Australian Parliament, or see video directly from protestors.

In short, this layer is what has opened up the media to anyone with a phone and an internet connection.  The implication is as profound as the printing press on freedom and democracy.


The Federalist Number 1. Blogging Our Way to Modern Democracy

I read somewhere that the written word will be humanity’s only true form of time travel.  It is a method of communicating thoughts across the ages, directly from one mind to another.  When you read a word, the writer reaches out across minutes, years, or eons and puts those thoughts directly into your head for examination.

Video, audio, and other means have a similar effect but there are so many competing factors.  The written word is the most direct method.

Is it any wonder, then, that God chose writing to convey His will across all these hundreds of years?  The uniqueness of this medium is manifest in the gravity of the phrase “the Word of God”.  Indeed, in John 1 God Himself is defined as “The Word”.

A 1455 Gutenberg Bible in the Library of Congress. While the age of the document itself is amazing, the fact that the words reach out through thousands of years is really hard to fathom.

Ok so shove me in the shallow waters here.  I’m only fixin’ to talk about the government.

It is a trip to think that the direction of government can be completely changed with words.  Pamphlets, newspapers, doorhangers, and Facebook posts can all convey thoughts to a critical mass of people that will change the course of history.  It’s why our First Amendment is so important.

Pamphlets like “The American Crisis”, by Thomas Paine were critical in motivating people during the Revolutionary War.

There are many other examples of how this has happened throughout history, but I want to focus on a collection of moderately obscure works called the Federalist Papers.

Most people are familiar with what the Federalist Papers are.  But it seems like very few people (including me) have actually dug into them to any degree.  This is understandable given the sheer volume and density of the material. But in a time when the validity of the US Constitution is questioned at the highest levels of government, I think it might be a good exercise to dig into such a thorough effort to justify its adoption.

In a nutshell, the Federalist papers (simply labeled “Federalist No. #” where # is a Roman numeral) were a series of articles across several New York newspapers arguing in favor of a new Constitution vs. the old Articles of Confederation.

Our original “Articles of Confederation”. Note the relatively boring lede: “To All to Whom”? Doesn’t really work on a bumper sticker.  It was worth scrapping the whole thing just to get the much better “We the People”.

Federalist #1 was published in “The Independent Journal” on October 27, 1787, and was written by Alexander Hamilton.  One month earlier, the new Constitution had been proposed.

There really are some good tidbits in these documents.  You can read Federalist number 1 by clicking on this link.  

It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.

-“Publius” in Federalist No. 1

Here are my summarizing points:

  • This nation is unique.  There are fundamental questions on how a country should function that are being addressed here and nowhere else.
  • We’re at a turning point here.  Either we update how our country is going to work, or things will descend into chaos.  It will affect all of humanity negatively.
  • That chaos will create (and is creating) power for some people, so they will oppose a new constitution.
  • They will try to hold on to this power by painting the new constitution as oppressive.
  • Some people are also planning to dissolve or split the union of states to create more power for themselves.
  • It takes a strong government to protect liberty.
  • I’m writing under a pseudonym so that the arguments will stand for themselves.
  • We’re going to go over the utility of a unified, federal government for your political prosperity.
  • We’re going to show how the existing Articles of Confederation aren’t good enough.
  • We will show how a new government as proposed in the Constitution is necessary to preserve our original ideas for a republic. We will list the reasons why it will do this.
  • We will compare it with the current state constitutions.
  • We will also show how a unified republic as defined in the Constitution is more secure.

So essentially Federalist #1 is an opening statement for the series.  It talks a little about why they are being written, what they hope to accomplish, and what points they are going to make.

It’s interesting to think that some of the most fundamental values and structures of our country were once open to such debate.  I’m looking forward to digging in further.

Self Driving Cars and the Need for Standard Roads

Like any child of the ’80s who’s into tech, I’m fascinated by the idea of self-driving cars.  The only thing cooler would be flying cars, but it seems we’ll have to keep crawling before we can fly.

Thanks to Google and Tesla, self-driving automobiles are now a real possibility.  In fact, Tesla’s communication and Musk’s relative record of success have made it more than a possibility: there is now a baked-in expectation that self-driving cars will revolutionize the world of transportation.

However, the reality is proving to be more difficult.  Delays and complications abound.  And predicting timelines has become foolhardy.

The obvious issue is that driving is very, very complicated and unpredictable.  So much so that human minds get routinely confused.  It just makes sense that artificial minds will have the same issues, that this is a very difficult problem to solve, and that it will take a while to do so.

But there may be ways to speed up the process.  And there may be tragic events that will suddenly slow down the process by many years or decades if we’re not smart about all this.  Let’s start with the latter.

Lidar and radar and cameras, oh my! Feeding information to self-driving AI is very complicated. It should give us new appreciation for our own 5 senses. Source: Boston Consulting Group

Artificial Intelligence needs tons of data to learn.  This means that AI engines will have to spend huge amounts of time to get the tons of data needed to learn how to drive our roads.  I think we’re learning that our roads are more complicated and unpredictable than we thought.  Which means the AI behind autonomous driving will take more and more data.

Tesla uses “shadow mode testing”, in which the AI engine pretends to drive a car, and its decisions are tested against the actions of a real driver.  The large number of Tesla drivers helps in this regard.

But this illustrates the problem.  Artificial intelligence and machine learning depend on mistakes.  The systems make mistakes and learn from them.  They make an enormous number of mistakes.   The more complex the environment, the more data you need.  And the more data you need, the more mistakes will be required to generate that data.

Yet driving is dangerous.  A mistake in driving can cost lives.  So the question quickly becomes “what is our tolerance for mistakes by self-driving cars?”  Are we willing to sacrifice lives so that cars can learn to drive themselves?

I think the answer is very likely to be “no”, and probably a more resounding “no” than we anticipate.  There have already been some episodes of loss of life related to autonomous cars.  And there have been odd attempts to cover up some close calls.  But the day we have a high-profile event (the loss of a family of four, a school bus accident, an elderly veteran run over), public and legislative opinion will shift quickly against the current tech.

An episode like that will be tragic for the individuals involved, but it will also set the autonomous vehicle effort back for decades.  People are too important, and this tech has too much potential to let that happen.  So what can we do?

Tesla’s visualization of pedestrians. Super cool…but what if these simple icons represented someone you love? A spouse, grandparent, or child? Are we ready to trust tech to this? Src: DirtyTesla Youtube

When it comes to autonomous driving, all the attention is on the cars themselves.  That makes sense given the ‘cool factor’ and the agency of the companies making the cars.  This is where the work is.

Hardly any attention is paid to the technology of roads themselves.  Even less attention is paid to the technology of planning, design, and construction of the roads.  It’s just accepted that the roads are what they are.

A huge part of advancing autonomous vehicles, I think, is to develop a set of standards and guidelines that will certify a road for autonomous cars.  Autonomous driving should require this certification.  It would include things such as:

  • Universal, standard lane markers, including curb and hash marks in turns
  • Assisting sensors in blind corners and unprotected turns
  • Redesign of crosswalks and bike lanes to protect pedestrians and bikers
  • Standardization of other vulnerable areas such as loading areas for passengers
  • Indicators of places where pedestrians and other vulnerable individuals are likely to be present: “high caution” areas that tell the AI to enter a heightened state of precision and sensitivity.
  • Appending or tagging some of this information to the GPS standards

Federal and state highways would be pretty easy to outfit, as they already follow standard guidelines.  The obvious issue will be local and rural roads.

Google’s self-driving project addresses part of this situation by mapping every area’s detail ahead of time.  This approach has a similar effect, in that it ‘certifies’ every road by documenting its features ahead of time.  There are a couple problems with this, however.

In essence Google’s Waymo ‘pre-certifies’ areas by training AI in the area and creating extensive maps.

First, it is a daunting task.  Even with the resources at Google’s disposal, it is nearly impossible to map every road.  Indeed, Google Street View still misses huge chunks of coverage despite the significant effort to cover everything.   And you shouldn’t underestimate the tendency in some places to consider mapping a privacy concern.

Second, streets change, and those changes could have significant implications.  Using Street View as a reference, it’s not uncommon to find places that haven’t been visited for many years…again despite a very comprehensive effort by Google.

Adding and adopting street standards and certification would help Google’s approach and speed up the process.

Interestingly, many retro-fantastic illustrations imply standard highway markings for self driving cars. It’s fun to see just how close these visions are in other ways. (Gunther Radtke)

There are no guarantees in life.  Walking out the door has its own level of risk.  But when it comes to life-and-death safety, we should mitigate these risks as much as practically possible.  When it comes to AI, autonomous driving, and self-driving cars, I think it’s obvious that a set of standards and a certification requirement are needed.  Moving in this direction now will allow us to leapfrog both delays in adoption and tragedy in achieving adoption.

 

OSI Model Layer 3 and 4, Freedom and Democracy


Layers 3 and 4 are the “network” and “transport” layers, respectively.

While layers 1 and 2 had to do with local traffic, the next two layers create the standards and protocols by which all these local networks can talk to each other (“internetworking”).  They operate at a global scale.

OSI Layer 3 – Network Layer

The network layer that currently dominates the world is the IP protocol.  Nearly everyone has heard of an IP address by now, probably in frustration as they tried to configure a home device or internet connection.

The power of the IP protocol is in its superior route-ability.  There have been other protocols that work well in certain circumstances, but IP proved to be the brilliant solution that literally created the internet.

IP’s superior routability stems from its super simple addressing scheme, in which you take a bunch of numbers (an address), apply another set of numbers (called a mask), and end up with a neatly sliced network-host delineation.

I’m pretty sure that’s a subnet cheat-sheet.
Photography: Yves Tessier 1972

You can think of the network as the street you live on, and the host as the house in which you live.  In the following examples, “Oak Street” and “10.29.44” are the network/street, while “3472” and the final “6” are the host/house.

3472 Oak Street

10.29.44.6

But IP addressing is far more powerful than a street address, in that the networks can then further be sliced up using masks.  A mask is another set of numbers that defines which part of the address is being addressed.  So you could further say:

Cleveland, Ohio

10.29.44.6

Here the locality (Cleveland) and the larger area (Ohio) correspond to different slices of the same address, as defined by the mask.  This slicing can get even more granular and complex as needed.
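
For anyone who wants to see the slicing in action, Python’s standard ipaddress module will do the network/host split for you. The address comes from the example above; the /24 and /16 masks are just illustrative choices.

```python
import ipaddress

# 10.29.44.6 with a /24 mask: the first three octets are the "street" (network),
# the last octet is the "house" (host).
iface = ipaddress.ip_interface("10.29.44.6/24")
print(iface.network)    # 10.29.44.0/24   -> the street
print(iface.ip)         # 10.29.44.6      -> the house on that street
print(iface.netmask)    # 255.255.255.0   -> the mask doing the slicing

# Change the mask and the very same address is sliced at a coarser level
# (a whole neighborhood instead of one street).
print(ipaddress.ip_interface("10.29.44.6/16").network)   # 10.29.0.0/16
```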

I won’t risk over-complicating a simple and elegant system by trying to cover it all in one blog post.  But the upshot is that millions of devices called routers can reliably and effectively transport huge amounts of data through multiple other routers and back.  It’s not uncommon for traffic to go through 10-20 routers on its way to a destination.

OSI Layer 4 – Transport Layer

Layer 4 is the layer that defines a conversation.  Take this human example of TCP (Transmission Control Protocol):

Sally: Hello, is this Joe?

Joe: Yes!  This is Joe.

Sally: Great!  Here’s some info…..*garbled*

Joe: I’m sorry, can you repeat that?  Also can you speak a little slower?

Sally: Sure…here…is….some…information…for you.  Did you get that?

Joe:  Yes I got it. I will deliver it to the appropriate party.

While this picture’s not a perfect analogy, TCP is responsible for making connections between IP endpoints, managing them appropriately, and ensuring that no information is garbled or lost.
Src: Seattle Municipal Archives

This conversation is a representation of a TCP conversation that happens trillions of times a day.   In contrast, here’s an example of UDP (User Datagram Protocol):

Sally:  Hey, I’m shouting this to Joe!  Joe, if you can hear me, here’s some information for you!

In contrast, UDP is a way to send a message to another machine without a connection or any guarantee that they will hear it, or hear it clearly.

Both of these conversations do essentially the same thing, but with a different set of requirements.  These requirements are defined by a layer 4 protocol.
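
Here is what those two conversations look like in code, as a rough sketch using Python’s standard socket module. The hostnames and addresses are placeholders; the point is the shape of the exchange: TCP sets up a connection and guarantees delivery, while UDP just fires off a datagram and hopes.

```python
import socket

# TCP: establish a connection first, then exchange data with delivery guarantees.
with socket.create_connection(("example.com", 80)) as tcp_sock:   # placeholder host
    tcp_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    reply = tcp_sock.recv(4096)    # lost or garbled pieces are retransmitted for us

# UDP: no handshake, and no guarantee the other side hears us (or hears us clearly).
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"Joe, here is some information for you!", ("192.0.2.10", 9999))  # placeholder address
udp_sock.close()
```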

Across layers 3 and 4, there are several protocols and combinations of protocols that assist communication.  They help control the speed of transmission, choose the best route between hosts, and handle several other critical functions that ensure data gets from point A to point B.

Implications for Freedom and Democracy

A redundant, reliable packet-switched (vs. circuit-switched) communications network was created for two reasons.  First, the number of computers in the world was very small, and people needed access to them without being physically present.  Second, the military needed a way to maintain control of nuclear resources and communications in the event of a nuclear war.

The relative weight of these two goals is somewhat in dispute.  And that makes complete sense given the supply of movie plots where scientific discovery was unwittingly being used for the military.   It’s pretty obvious that everyone involved had their own goals in mind.

Like the internet, both man and monkey in the movie Outbreak were either a military asset or a way to save the world. But in reality they were both.
Src: Warner Bros

But, the implications for today are clear.  Using these technologies, you can send data reliably from a very localized device to another very localized device anywhere around the world.  We are seeing this play out now in Ukraine.  This is a unique enough situation that I will post about it separately.

Because these systems were designed to create access at a large scale, they ensure that anyone in the world can communicate with anyone else.  They can do this directly and without reliance on a mediator or central 3rd party.

Because these systems were designed, at some level, to survive nuclear hostilities, they are inherently robust and redundant.  Getting in the way of these connections is very hard.

Freedom loves communication and the free flow of information.  Indeed, it depends on it.  Layers 1-4 are great enablers of freedom.


OSI Model Layer 1 and 2, Freedom and Democracy

So let’s look at the first 2 layers of the OSI model.  These are the “Physical” layer and the “Data Link” layer.  These layers are separate and distinct, but in practical application they are usually part of the same implementation.

The physical layer (layer 1) is, as it implies, concerned with the physical elements of a connection.  Voltages, pin-outs, mechanical considerations, connectors, etc.  In the case of fiber optics, it deals with wavelengths and supported configurations such as single or multi mode.  Because it is physical, this layer tends to be focused on local networks or networks with fewer participants.

Because of the radically different technologies out there at the physical layer, there is not really a standard unit of data.  It can be very different depending on topology.

The IMP, the very first router. While it technically includes layer 3 functions, this is the first device that let computers communicate at layer 1 and 2. (1969)

The Data Link layer (layer 2) defines the formats of data that will be communicated on top of layer 1: how data is divided up into chunks, how things on a local network will be addressed (such as MAC addresses), and how a system will know which chunk of data belongs to which device.  These chunks are usually called “frames”.
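
As a rough sketch of what one of those frames looks like, here is the 14-byte Ethernet II header built by hand in Python. The MAC addresses and payload are made up; real hardware also appends a checksum.

```python
import struct

dst_mac = bytes.fromhex("aabbccddeeff")   # destination MAC address (made up)
src_mac = bytes.fromhex("112233445566")   # source MAC address (made up)
ethertype = struct.pack("!H", 0x0800)     # 0x0800 = "the payload is an IPv4 packet" (layer 3)

header = dst_mac + src_mac + ethertype    # 14-byte Ethernet II header
payload = b"...an IP packet would go here..."
frame = header + payload                  # real NICs also append a 4-byte CRC

print(frame.hex(" "))
```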

For layers 1 and 2, most people will have used twisted-pair Ethernet or various forms of WiFi.  If you used a computer at work in the ’80s or ’90s, you may have used other forms of Ethernet or even Token Ring.  If you’re really fancy, you may have fiber Ethernet coming to your house.

Whatever the case, the implication for freedom and democracy is interoperability.  Layer 1 and 2 ensure that your devices can talk to each other at the most basic level.

An Ethernet Frame. When your coffeemaker talks about you to Alexa, this is the picture it uses.

Information is very important to freedom and democracy.  Indeed, it’s why the First Amendment exists and has been upheld and bolstered as technology advances.  Being able to consume and produce information freely is vital to the concept of liberty.

We forget that not too long ago our television, our record player (or 8-track!), our camera, our phones, and everything else all lived in separate worlds.  You couldn’t listen to a podcast or stream a news channel across the platform of your choice.  Or, more importantly, you couldn’t make a podcast or vlog from the platform at all.

Layer 1 and layer 2 interoperability allows your phone to stream a video connection to loved ones.  It allows you to listen to a podcast.  If you don’t like the selection of news channels, you can download and view another in the local medium of your choice.

You could buy a bunch of cool stuff in 1989, but very few things talked to other things. Src: radioshackcatalogs.com

It makes it extremely easy for manufacturers to create cheap and reliable tech that allows all of this.  If one tries to make things too proprietary, other things won’t work with it.

(Having said that, you can also see the creators’ intent and values in layer 1 and 2 technology.  If you’ve ever set up an Ethernet network or even a more modern WiFi network, it’s still a pretty localized technical process.)

Layers 1 and 2 are important because they are closest to us.  And they bring the concepts of electronic freedom into our living room.
