AI and the Fear of Technologies Past

As we move headlong down the rabbit hole that is AI, we are seeing quite a bit of fear and hyperbole.  AI will cause the extinction of humanity, the mass elimination of jobs, and enable all sorts of world-ending scenarios.

Of course, these predictions could be true.  AI is indeed a world-changing, 'disruptive' technology.  Personally, I haven't seen a watershed with this much water-shedding potential in my 30-year tech career.  But I think much of the negativity has a bit of a Chicken Little tone to it.

The cultural touchstones created in fiction and entertainment haven't helped.  Whether it be the brutal, soulless violence of the Terminator or the quiet, plodding evil of HAL 9000, we have been set up to view AI with suspicion.  These depictions are a warning, but they are fiction.

While AI’s potential is somewhat unprecedented, there is a historic template for the fear it’s causing.  The fear of encryption in the 90’s had much of the same tone.

The advent of high-grade encryption for the masses looked a lot like what we're seeing with AI.  Fast, general-purpose processing power was suddenly available to everyone, and a wave of new applications came with it.

One of those applications was PGP, short for "Pretty Good Privacy".  It combined RSA (Rivest-Shamir-Adleman) asymmetric-key encryption with IDEA (International Data Encryption Algorithm) symmetric-key encryption to produce an extremely strong encryption application.  Its author, Phil Zimmermann, released it for free, which meant that everyone was suddenly able to encrypt data with military-grade encryption.
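PGP's hybrid design is still the standard pattern today: a fast symmetric cipher encrypts the message under a random one-time session key, and the slow asymmetric cipher encrypts only that small key. Here is a toy Python sketch of the structure, with a repeating-key XOR standing in for IDEA and the RSA step stubbed out; this illustrates the pattern only and is emphatically not real crypto:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR stands in for the real symmetric cipher (IDEA/AES);
    # the same operation both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hybrid_encrypt(plaintext: bytes):
    session_key = secrets.token_bytes(16)   # fresh random key per message
    ciphertext = xor_cipher(plaintext, session_key)
    # Real PGP: wrapped_key = rsa_encrypt(recipient_public_key, session_key)
    wrapped_key = session_key               # placeholder for the RSA step
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    # Real PGP: session_key = rsa_decrypt(private_key, wrapped_key)
    session_key = wrapped_key
    return xor_cipher(ciphertext, session_key)

key, ct = hybrid_encrypt(b"meet at noon")
assert hybrid_decrypt(key, ct) == b"meet at noon"
```

The point of the split is speed: asymmetric operations are thousands of times slower than symmetric ones, so you only ever RSA-encrypt the 16-byte key, never the message itself.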

(Zimmermann originally used his home-brew BassOmatic symmetric-key cipher but switched after significant holes were pointed out.)

BassOmatic encryption was supposed to scramble data like Dan Aykroyd scrambled this fish.  But like the lid in the video, the crypto had a weakness and was soon replaced by a more secure standard.

The result was a crazy mismatch between government policy and reality.  Encryption was classified as a "munition" under federal law, and exporting it could bring heavy penalties.  So technically, exporting the PGP program outside the US could have meant over a million dollars in fines and 10 years in prison for each instance.

Needless to say, this was a stark gap between commercial availability and legal consequence.  The versatility and power of a home computer suddenly gave it the same legal classification as an automatic rifle, an F-16, or plutonium.  Historically, even the late Radio Shack didn't sell such things.  Now it suddenly did.

This incongruity was boiled down to illustrative extremes.  Emailing, or posting on a website, four lines of Perl code could expose you to a million-dollar fine and 10 years of jail time.  Illegal activity with a computer is commonly understood now, but the idea of a quirky, nerdy home computer being a munition was ludicrous at the time.
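For context on why a few lines were enough: the mathematical core of RSA is just modular exponentiation, which any general-purpose computer does trivially. A toy Python sketch with deliberately tiny, insecure primes (real keys use primes hundreds of digits long):

```python
# Toy RSA with tiny primes -- for illustration only, trivially breakable.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse)

message = 42
ciphertext = pow(message, e, n)     # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n
assert recovered == message
```

That one-line `pow(m, e, n)` is, legally speaking, the "munition" the export rules were aimed at.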

The very browser you're using right now would have been the legal equivalent of a Sidewinder missile in the eyes of the law.

The absurdity in Federal law created by the advancement of the CPU was illustrated by this t-shirt. Wearing it was wearing a ‘munition’ as defined by the law.

While it was widely recognized that this situation was just a little bit crazy (the Gubmint backed off a bit in the mid-'90s), the rhetoric and hyperbole from the Gubmint only escalated.  There were many sky-is-falling scenarios about what the world would look like now that anyone could encrypt data.

In 1993 the "Clipper chip" was introduced.  The government wanted to put a back door in every encryption device so it could retain access to secure communication:

“Without the Clipper Chip, law enforcement will lose its current capability to conduct lawfully-authorized electronic surveillance.” – Georgetown professor Dorothy Denning

The FBI Director in 1997 famously said:

“Uncrackable encryption will allow drug lords, spies, terrorists and even violent gangs to communicate about their criminal intentions without fear of outside intrusion. They will be able to maintain electronically stored evidence of their criminal conduct far from the reach of any law enforcement agency in the world.” – FBI Director Louis Freeh 

Even as recently as 2011 and 2014, law enforcement agencies were saying things like this:

“We are on a path where, if we do nothing, we will find ourselves in a future where the information that we need to prevent attacks could be held in places that are beyond our reach… Law enforcement at all levels has a public safety responsibility to prevent these things from happening.” – FBI General Counsel Valerie Caproni (2011) 

And in the modern version of the PGP issue, when the government wanted unfettered access to your iPhone:

“There are going to be some very serious crimes that we’re just not going to be able to progress in the way that we’ve been able to over the last 20 years.” -Deputy Attorney General James Cole (2014)

It's not hard to see the parallels with the rhetoric we're seeing around AI.  It's in our nature to imagine extreme futures where the worst-case scenario reigns.  But such rhetoric is also a tool for creating a policy that someone wants.  Sometimes they skip the rhetoric and just tell you what they want:

Speaking at a cybersecurity conference in 2014, NSA Director Michael Rogers said: "I don't want a back door… I want a front door. And I want the front door to have multiple locks. Big locks."

So, it’s worth noting that, like encryption, the extreme rhetoric we’re seeing in AI is probably not just to get clicks and readers.  It reflects a policy push, both overt and covert.  The warnings in public were matched by very serious efforts behind the scenes to address the fear of a world with readily-available, strong encryption.

These efforts were revealed in the Edward Snowden leaks.  They included secret partnerships with private companies, extensive efforts to break encryption, and covert efforts to sabotage proprietary and open-source projects.   This could be the subject of an entire post.  But you can bet similar efforts are being implemented due to the perceived threat of AI.

Encryption became a character all its own in many thrillers, as early as 1992. This movie highlights the view of encryption as a tech that can end the world as we know it. It also highlights an odd Aykroyd connection in this post.

Did these extensive efforts help us?  It’s impossible to know.  Like the barking dog who thinks his efforts thwart a mass murder by the mailman, it could be an illusory correlation.  Or the end of the world could have been prevented multiple times.  

So as we move into the world of AI, we may be facing an unprecedented scale of impact.  The situation itself, however, is very much precedented.  It's best to push past the scary rhetoric and get into the messy world of actual analysis and prediction.

We should also understand that massive, massive amounts of capital and human effort are working behind the scenes in ways we may never know about.

 

How AI Will Make us Safer: Ending Distracted Driving

The legislature in which I serve is now considering a distracted driving ban.  I’m not going to go into that bill, but it does usher in my next topic.  We are about to see a crazy revolution in user interfaces driven by AI.  It will render touchscreens useless and change the whole topic of distracted driving.  Which will be a good thing.

It’s already cliché to say that AI will change everything. So we will just talk about this one part.

The AI revolution started in earnest a few months ago with the release of ChatGPT. Yes, there were many milestones before that, but I really think ChatGPT will be seen as the turning point that brought AI into common thought and wove it into the zeitgeist of tech.  Everyday people are already using ChatGPT to get things done.

While ChatGPT is amazing, and the corresponding efforts by Google et al. will be equally amazing, probably the most profound revolution will be in the way we interact with technology.  I see it as the third big phase of this topic.  Let's look at that, but first, let's look at the first two phases.

The first phase was stationary and tactile.  

Computing tech was stationary primarily because it was huge.  It took tons of space.  It needed tons of power and cooling.  Some mainframe implementations actually needed water!  Even as it shrunk, it still needed a desktop.

When ‘luggables’ and laptops started to enter the picture, they were still just mobile implementations of a stationary experience.  That still holds today.

A Compaq “luggable” computer. The early ancestor of mobile computing. (Tiziano Garuti)

Computing was also tactile.  I’m not sure why this is, but I think it was just assumed to be good design.  The keyboard made a satisfying “click” when you used it.  The mouse was weighted well, and the buttons gave good feedback.  Your fingers could provide information to your brain about what was happening.

And this is an important point.  Tactile interfaces could provide feedback and context without looking at the interfaces. Keyboards had a ‘nub’ on certain keys so you could put your hands in position without taking your eyes off the screen or document.  The clicks of the mouse and keyboard could report that an input had been received without a visual confirmation.

Tactile interfaces leveraged one of our five major senses to interact with technology.  This is a big deal, and it's an aspect well understood by the gaming community.  The tactile interface of your mouse or keyboard can mean (virtual) life or death, and there's a huge market of expensive implementations.

Gaming gear is a rich pageant for the senses. Including plenty of tactile feedback. (razer.com)

Losing the tactile interface eliminated an entire sense from our interaction with technology.  It has likely cost us hundreds of thousands of lives and drastically reduced our productivity, which leads us to the 2nd phase.

The second phase (which we’re in) is mobile and visual.

In the second phase, technology got small enough to be portable.  

The early part of the second phase maintained its ancestor's tactile aspect.

Because tactile interfaces didn't require your eyeballs, they didn't affect your overall interaction with the real world.  Bike couriers famously rode through large cities while texting on a phone in their pocket.  Kids could text in class without being discovered.  It was a unique blend of the first and second phases.

Then Apple perfected the touchscreen, and the 2nd phase picked up momentum.

The late Blackberry was an amazing smart phone with a great tactile interface. The keyboard and “click wheel” made it easy to interact with. It was a unique bridge between phase 1 and 2.

The touchscreen eliminated all of the tactile aspects of the prior world.  You still touched the technology, but you had to look at it.  And once those eyeballs descended to the touchscreen, they never left.  And once those eyeballs were locked on the screen, the level of distraction skyrocketed.

However it’s important to realize that we are not distracted because we want to be.  We’re distracted because we have to be.  Once the touchscreen entered the picture (get it?), we were forever distracted by design.  As our world has become app dependent, it has made distraction a requirement to exist. 

An elevator touchscreen. While these may facilitate an easier design, what’s the advantage over push buttons? Watch how much time and attention these screens get the next time you visit one. (src: Disney Military Blog)

Unfortunately, this trend has continued to the point where we're completely surrounded by touchscreens.  There is at least some recognition that this is a bad thing.  And it's unlikely that this will change on its own.  Touchscreen design is the hegemony of interaction.

It doesn’t matter, however.  AI will prove to be a better way of interacting with technology, and it can replace touchscreens by simply being added to the mix.  As a disruptive tech, it will easily crush the touchscreen in terms of interaction.

The Coming age of the AI-driven Interface

I know, I know.  Siri stinks.  Siri is buggy, gets words wrong, is Apple-centric and is really limited in usefulness.  But Siri and Alexa and such are mere shadows of what is to come.

Imagine saying "Hey [phone], can you plan a route to the beach, and try to find a way that avoids normal spring break traffic jams.  Oh, and take us through some of the more scenic drives.  Maybe a small historic church or small town courthouse.  Also, make a playlist for the trip that is good for driving, with some of my family's favorite songs…be sure and add the Beach Boys to the list as we get closer to the coast."

This would take an hour or two of pre-planning in the current interaction model.  It would require many clicks and taps at the keyboard.  On a phone or tablet, it would probably take even longer.

More importantly, it would be impossible to do while driving.  And you would be hyper-focused on the interaction wherever you did it.  But AI interactivity will completely free up your time and focus.  You will be able to ask this of your car, your phone, or some device we haven't contemplated yet.  And you'll be able to do it after you've already left, with your eyes on the road and your hands at 10 and 2.

The AI-driven interface will insert itself between you and the technology.  It will eliminate the need to touch and look, and will handle all the abstraction of bouncing between apps.

We will clearly have more time to spend on fashion, and our hair, in the age of AI interfaces.

There is much more to think about in all this.  The best model is to imagine a college student who is always there ready to interact with your phone for you when needed.  Think of how that would change your interaction with day-to-day technology.  You’ll only look at the screen when needed, and you’ll only be distracted when you choose to be.

 

Vulnerability of Endpoints and The Problem With Cryptocurrencies

Every few months, a crypto exchange fails.  Crypto exchanges–the sites and systems where you can convert regular currency  to cryptocurrency–have a habit of failing, and the results have been a steady stream of people losing money.  The losses seem to get bigger and bigger.

The recent failure of FTX is by far the largest and most spectacular.  It's probably the most damaging to the perception of crypto due to its intersection with political drama.  But it's by no means the only failure:

Mt Gox 2014 – The first really large exchange fell to hacking.  The site, originally intended as a place to trade "Magic: The Gathering" cards, lost what was then $450 million.

QuadrigaCX 2018 –  A Canadian site went down when the owner mysteriously died, and investigators subsequently couldn’t find any of the funds.

Thodex 2021 – A Turkish exchange went down when the owner disappeared.  Loss was upwards of $2 billion.

The Mt Gox failure was one of the first high-profile exchange failures. Mt Gox, short for "Magic: The Gathering Online eXchange", was originally designed for trading cards. What could go wrong? (src: Stanford Review)

These are just 3 failures in a list of 50+ since 2009.  Many people have lost billions of dollars, and many people have illicitly benefitted.   When this happens, it is usually tacitly called a failure of crypto itself.   All of cryptocurrency–as a technology–is called a scam, pyramid scheme, etc.

But why is this happening, and does it mean there’s no future in blockchain based money?  We can answer that, but first we have to look at a basic principle in crypto.

Cryptography is always vulnerable at the endpoints.

This is a key principle in understanding how to secure things with cryptography.  If you want to defeat cryptography, attacking cryptography itself is hard.  Attacking things outside cryptography is easier.  

The movie The Imitation Game shows just how hard it was to defeat a cryptosystem itself. It took plenty of luck and brainpower. Modern crypto systems are not vulnerable like this.

For example, a message that hasn't been encrypted yet can be read.  So you can compromise the computer and read it before it's encrypted.  Or you can set up a "man in the middle" attack, secretly inserting yourself between the sender and the cryptographic system.

In its most simple implementation, a “rubber hose” attack can be used to physically threaten a person and get the key to decrypt something.  This may be applied illicitly and illegally, or even by a legitimate court who threatens jail time for not revealing a key.

In all of these examples, the cryptography itself is secure.  It's the 'stuff' around it that isn't, so an attacker attacks that 'stuff'.  It's not enough to use good crypto.  You have to secure everything around it as well.

Modern cryptography is secure.  Blockchain technology is secure.  If you maintain cryptocurrency in a wallet, and you take basic steps to secure it, you'll be fine.  Wallets, as an endpoint, are very secure.
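A minimal sketch of why the chain itself is so hard to attack: each block commits to the hash of the one before it, so altering any historical record breaks every link after it. This toy Python version uses SHA-256 and illustrates the principle only; it is nothing like a production blockchain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic hash of a block's full contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64          # genesis "previous hash"
    for rec in records:
        block = {"data": rec, "prev": prev}
        prev = block_hash(block)        # next block commits to this one
        chain.append(block)
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:       # link broken: history was altered
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob 1", "bob pays carol 2"])
assert verify(chain)
chain[0]["data"] = "alice pays mallory 999"   # tamper with history
assert not verify(chain)
```

Notice that the tampering is detected without trusting anyone: the math exposes it. The exchange failures happen outside this mechanism entirely.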

Wallets are super cool.  They make you feel like James Bond when using them.  But blockchain wallets are also hard and unforgiving.  Maintaining a blockchain wallet of any kind is not for the faint of heart.  If you lose it, forget the password, or mismanage your wallet in other ways, you lose everything.

Trezor and Ledger Hardware Wallets.  regularguy.eth/Unsplash

As a result, many average people are delegating that duty to an online exchange and leaving huge sums of money in them. But exchanges are just websites that reside at the endpoints of blockchain technology.  So they can be compromised.

What we’re seeing in current cryptocurrency and blockchain scandals is that nobody is securing the endpoints.  

Until wallets become more fool-proof, we must anticipate a continued reliance on exchanges.  And these endpoints must be hardened to prevent loss.  There are some ways to do that, and I’ll discuss that next.

 

OSI Model Layer 5 and 6, Freedom and Democracy


Layer 5, the Session layer, is really a nuts-and-bolts layer that is difficult to explain in this context.  Its implications here are minimal, so we're going to skip it.  There are some relevant points around VPNs and authentication, but the really good parts are in layer 6.

Layer 6 is the subject of a lot of debate.  And boy, is it a geeky debate.  Think “how many Picards can dance on the head of a pin” kinda debate.  I won’t get too into it other than to say some people would disagree with my thoughts on this.

(For my fellow geeks who would disagree, here it is in a nutshell: Layer 6 is where data interoperability lives: compression, encryption, text conversion, etc.  The line is a little blurry with layer 7.  But in my interpretation, the mechanisms, programs, and code in layer 7 may be very different, yet they read the same data and successfully interpret it.  That action indicates a lower abstraction layer, and that layer is layer 6.)

For the non-technical, that means a JPG file from your home security camera also works in a web browser.  An MPG from your iPhone can also play on an Android.  And a PDF works across multiple devices.

The tragedy of the 2011 Tsunami in Japan can be felt by billions of people due to the standard formats used to capture the event.

So the most important implication for freedom and democracy at Level 6 is a standard form of media across all devices and programs.

Not too long ago, you couldn't watch a video from England (and most of Europe) on a device in the US.  They used two completely separate video standards (PAL there, NTSC here).  You needed a different video tape, a different VCR, and a different TV (or "telly").

During the 1991 Soviet coup attempt, the coup leaders captured President Gorbachev and sent him away to "rest".  While he was captive, Gorbachev's son-in-law Anatoly secretly recorded four messages from him to the outside world, and cut the physical video tape up so that it could be smuggled out.

Interestingly, I am unable to find any video from this tape online.  That may very well be due to the limited ability to encode the format of the tape, since the format of that tape was probably very….Soviet.  But you can see a video of Gorbachev describing his captivity by clicking here.

In today’s world, interoperability means you can watch a drone video from the battlefields of Ukraine, watch a debate in Australian Parliament, or see video directly from protestors.

In short, this layer is what has opened up the media to anyone with a phone and an internet connection.  The implication for freedom and democracy is as profound as that of the printing press.


The Federalist Number 1. Blogging Our Way to Modern Democracy

I read somewhere that the written word will be humanity's only true form of time travel.  It is a method of communicating thoughts directly from one mind to another across the ages.  When you read a word, the writer reaches out across minutes, years, or eons and puts those thoughts directly into your head for examination.

Video, audio, and other means have a similar effect but there are so many competing factors.  The written word is the most direct method.

Is it any wonder, then, that God chose writing to convey His will across all these hundreds of years?  The uniqueness of this medium is manifest in the gravity of the phrase “the Word of God”.  Indeed, in John 1 God Himself is defined as “The Word”.

A 1455 Gutenberg Bible in the Library of Congress. While the age of the document itself is amazing, the fact that the words reach out through thousands of years is really hard to fathom.

Ok so shove me in the shallow waters here.  I’m only fixin’ to talk about the government.

It is a trip to think that the direction of government can be completely changed with words.  Pamphlets, newspapers, doorhangers, and Facebook posts can all convey thoughts to a critical mass of people and change the course of history.  It's why our First Amendment is so important.

Pamphlets like “The American Crisis”, by Thomas Paine were critical in motivating people during the Revolutionary War.

There are many other examples of how this has happened throughout history, but I want to focus on a collection of moderately obscure works called the Federalist Papers.

Most people are familiar with what the Federalist Papers are.  But it seems very few people (including me) have actually dug into them to any degree.  That's understandable given the sheer volume and density of the material.  But in a time when the validity of the US Constitution is questioned at the highest levels of government, I think it might be a good exercise to dig into such a thorough effort to justify its adoption.

In a nutshell, the Federalist papers (simply labeled “Federalist No. #” where # is a Roman numeral) were a series of articles across several New York newspapers arguing in favor of a new Constitution vs. the old Articles of Confederation.

Our original “Articles of Confederation”. Note the relatively boring lede: “To All to Whom”? Doesn’t really work on a bumper sticker.  It was worth scrapping the whole thing just to get the much better “We the People”.

Federalist No. 1 was published in "The Independent Journal" on October 27, 1787, and was written by Alexander Hamilton.  One month earlier, the new Constitution had been proposed.

There really are some good tidbits in these documents.  You can read Federalist number 1 by clicking on this link.  

It has been frequently remarked that it seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.

-“Publius” in Federalist No. 1

Here are my summarizing points:

  • This nation is unique.  There are fundamental questions on how a country should function that are being addressed here and nowhere else.
  • We’re at a turning point here.  Either we update how our country is going to work, or things will descend into chaos.  It will affect all of humanity negatively.
  • That chaos will create (and is creating) power for some people, so they will oppose a new constitution.
  • They will try to hold on to this power by painting the new constitution as oppressive.
  • Some people are also planning to dissolve or split the union of states to create more power for themselves.
  • It takes a strong government to protect liberty.
  • I’m writing under a pseudonym so that the arguments will stand for themselves.
  • We’re going to go over the utility of a unified, federal government for your political prosperity.
  • We’re going to show how the existing Articles of Confederation aren’t good enough.
  • We will show how a new government as proposed in the Constitution is necessary to preserve our original ideas for a republic. We will list the reasons why it will do this.
  • We will compare it with the current state constitutions.
  • We will also show how a unified republic as defined in the Constitution is more secure.

So essentially Federalist #1 is an opening statement for the series.  It talks a little about why they are being written, what they hope to accomplish, and what points they are going to make.

It’s interesting to think that some of the most fundamental values and structures of our country were once open to such debate.  I’m looking forward to digging in further.

Self Driving Cars and the Need for Standard Roads

Like any child of the '80s who's into tech, I'm fascinated by the idea of self-driving cars.  The only thing cooler would be flying cars, but it seems we'll have to keep crawling before we can fly.

Thanks to Google and Tesla, self-driving automobiles are now a real possibility.  In fact, Tesla's communication and Musk's record of success have made them more than a possibility.  There is now a baked-in expectation that self-driving cars will revolutionize transportation.

However, the reality is proving to be more difficult.  Delays and complications abound.  And predicting timelines has become foolhardy.

The obvious issue is that driving is very, very complicated and unpredictable.  So much so that human minds routinely get confused.  It stands to reason that artificial minds will have the same issues, that this is a very difficult problem to solve, and that it will take a while to do so.

But there may be ways to speed up the process.  And there may be tragic events that will suddenly slow down the process by many years or decades if we’re not smart about all this.  Let’s start with the latter.

Lidar and radar and cameras, oh my! Feeding information to self-driving AI is very complicated. It should give us new appreciation for our own five senses. Source: Boston Consulting Group

Artificial intelligence needs tons of data to learn.  That means AI engines will have to spend huge amounts of time gathering the data needed to learn how to drive our roads.  And we're discovering that our roads are more complicated and unpredictable than we thought, which means the AI behind autonomous driving will need more and more data.

Tesla uses "shadow mode" testing, in which the AI engine pretends to drive the car and its decisions are compared against the actions of the real driver.  The large number of Tesla drivers helps in this regard.
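The shadow-mode idea can be sketched in a few lines (this is a hedged illustration of the concept, not Tesla's actual implementation): the model proposes an action for each moment of driving, the human's real action serves as the ground truth, and only the disagreements are kept for analysis and training.

```python
def shadow_mode(frames, model, human_log):
    """Compare the model's proposed action to the human's real action
    for every frame; return only the disagreements."""
    disagreements = []
    for frame, human_action in zip(frames, human_log):
        predicted = model(frame)
        if predicted != human_action:
            disagreements.append((frame, predicted, human_action))
    return disagreements

# Toy stand-ins: a "frame" is just a speed reading, and the
# model brakes above 50.
toy_model = lambda speed: "brake" if speed > 50 else "cruise"
frames = [30, 55, 70, 40]
human = ["cruise", "cruise", "brake", "cruise"]  # human didn't brake at 55

mismatches = shadow_mode(frames, toy_model, human)
assert mismatches == [(55, "brake", "cruise")]
```

The appeal is that every mile of human driving becomes labeled training data, and no mistake ever reaches the steering wheel.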

But this illustrates the problem.  Artificial intelligence and machine learning depend on mistakes.  The systems make mistakes and learn from them.  They make an enormous number of mistakes.  The more complex the environment, the more data you need.  And the more data you need, the more mistakes are required to generate it.

Yet driving is dangerous.  A mistake in driving can cost lives.  So the question quickly becomes “what is our tolerance for mistakes by self-driving cars?”  Are we willing to sacrifice lives so that cars can learn to drive themselves?

I think the answer is very likely to be “no”, and probably a more resounding “no” than we anticipate.  There have already been some episodes of loss-of-life related to autonomous cars.  And there have been odd attempts to cover up some close calls.  But the day we have a high profile event–a loss of a family of four, a school bus accident, an elderly veteran run over–public (and legislative) opinion will shift quickly against the current tech.

An episode like that will be tragic for the individuals involved, but it will also set the autonomous vehicle effort back for decades.  People are too important, and this tech has too much potential to let that happen.  So what can we do?

Tesla’s visualization of pedestrians. Super cool…but what if these simple icons represented someone you love? A spouse, grandparent, or child? Are we ready to trust tech to this? Src: DirtyTesla Youtube

When it comes to autonomous driving, all the attention is on the cars themselves.  That makes sense given the 'cool factor' and the agency of the companies making the cars.  This is where the work is.

Hardly any attention is paid to the technology of roads themselves.  Even less attention is paid to the technology of planning, design, and construction of the roads.  It’s just accepted that the roads are what they are.

A huge part of advancing autonomous vehicles, I think, is to develop a set of standards and guidelines that will certify a road for autonomous cars.  Autonomous driving should require this certification.  It would include things such as:

  • Universal, standard lane markers, including curb and hash marks in turns
  • Assisting sensors in blind corners and unprotected turns
  • Redesign of crosswalks and bike lanes to protect pedestrians and bikers
  • Standardization of other vulnerable areas such as loading areas for passengers
  • Indicators of places where pedestrians and other vulnerable individuals are likely to be present: "high caution" areas that tell the AI to enter a heightened state of precision and sensitivity
  • Appending or tagging some of this information to the GPS standards
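No such certification standard exists yet, so purely as a hypothetical, here is one way the items above might be encoded as machine-readable tags a vehicle could consume. The segment ID, field names, and the crude distance check are all invented for illustration:

```python
# Hypothetical road-certification record (no real standard exists yet).
road_segment = {
    "segment_id": "US-87-mm142-mm143",       # made-up identifier
    "lane_markers": "standard-v1",           # universal lane/curb/hash spec
    "blind_corner_sensors": True,            # assisting sensors present
    "crosswalk_design": "protected-v1",
    "high_caution_zones": [                  # heightened-sensitivity areas
        {"type": "school", "gps": (35.08, -106.65), "radius_m": 300},
    ],
    "certified": True,
}

def requires_caution(segment, position):
    """Would a car at this position be inside any high-caution zone?"""
    lat, lon = position
    for zone in segment["high_caution_zones"]:
        zlat, zlon = zone["gps"]
        # Crude degrees-to-meters conversion; fine for a sketch.
        dist_m = ((lat - zlat) ** 2 + (lon - zlon) ** 2) ** 0.5 * 111_000
        if dist_m < zone["radius_m"]:
            return True
    return False

assert requires_caution(road_segment, (35.0801, -106.6501))
assert not requires_caution(road_segment, (36.0, -106.65))
```

A car could refuse to engage autonomy on any segment where `certified` is false, which is exactly the gatekeeping role the certification is meant to play.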

Federal and state highways would be pretty easy to outfit, as they already follow standard guidelines.  The obvious issue will be local and rural roads.

Google’s self-driving project addresses part of this situation by mapping every area’s detail ahead of time.  This approach has a similar effect, in that it ‘certifies’ every road by documenting its features ahead of time.  There are a couple problems with this, however.

In essence Google’s Waymo ‘pre-certifies’ areas by training AI in the area and creating extensive maps.

First, it is a daunting task.  Even with the resources at Google's disposal, it is nearly impossible to map every road.  Indeed, Google Street View still misses huge chunks of coverage despite a significant effort to cover everything.  And you shouldn't underestimate the tendency in some places to consider mapping a privacy concern.

Second, streets change, and those changes can have significant implications.  Using Street View as a reference, it's not uncommon to find places that haven't been revisited in many years, again despite a very comprehensive effort by Google.

Adding and adopting street standards and certification would help Google’s approach and speed up the process.

Interestingly, many retro-fantastic illustrations imply standard highway markings for self driving cars. It’s fun to see just how close these visions are in other ways. (Gunther Radtke)

There are no guarantees in life.  Walking out the door carries its own level of risk.  But when it comes to life-and-death safety, we should mitigate these risks as much as practically possible.  For AI, autonomous driving, and self-driving cars, I think a set of standards and a certification requirement is clearly needed.  Moving in this direction now will let us leapfrog both delays in adoption and tragedy along the way.

 

On a Return to Blogging

I was reading through my old blog archives the other day.  As I parsed the slow slog of daily entries, I realized just how bad things have become over the last 10 or so years.  The internet used to be a positive place, full of interesting things and cool information.

Back then, it wasn't about likes or retweets/shares/broadcasts.  It was just a fun way of putting things out there that people might enjoy.  There was less concern about how many people were interested in what you said (although you knew a certain number were), and you didn't really have much information on how many were anyway.

I’ve also looked at old social media archives from the 2010 era, and the realization was similar.

This used to be fun….

What in the world happened?

I know ‘blogging’ is dated and all this sure seems like a hackneyed good-ol-days rant.  But I think there’s more to it.

Let's explore what happened.  And further, let's not just resolve to fix it.  The internet is still an open and positive place if we'll let it be.  But we can't build on the things that took it awry.  We have to step back and look at the fundamentals.