Armed man (attacker) holds pistol in a public place. (Shutterstock)

A couple of weeks ago, reviews for a new wearable AI device started hitting the market. While the reviews aren’t great, they do give us a hint at the future of wearable technology. Used right, future versions of this technology could make people, armed or unarmed, a lot safer from criminal attackers.

But, before we get to my ideas on the potential for this technology, let’s first share one of these reviews and discuss what the technology is supposed to do:

What It’s Supposed To Be, But Falls Short Of

In short, the Humane AI Pin is something you wear on the front of your shirt. But instead of being a decorative thing like a brooch or a body camera, Humane’s device aims to be a lot like the latest Star Trek combadges, giving you access to computing power, a translator, a projector display and more (sorry, no transporters though). Using chatbot technology, you can get the benefits of a computer without having to have a screen in front of your face.

But, to get all of this cool stuff, you’re talking about spending $700 and paying $24/month. So, it had better work well.

Sadly, it just doesn’t work well yet. The hardware is great, packing a lot of computing power, battery, sensors and more into a small and elegant package. But the software just doesn’t make effective use of it. It’s often very slow, and it often gets things wrong, just like a chatbot. Battery life is poor and often unpredictable, making it hard to keep the device running all day. It also sheds a lot of heat, making it uncomfortable to wear at times; at other times it stops working entirely due to overheating.

The video goes into depth about many other things it’s doing wrong, but suffice it to say, it seems like a very early take on wearable AI technology. There’s still a lot of work yet to do to deliver on the promise.

What We Might Look Forward To

When companies get this idea going, the end result will kind of be like an extension of the human brain and our normal social skills. Like our own brains, the device has sensor inputs that get processed to create outputs. Because we’re social animals, we are already accustomed to using other people, machines and even animals as extensions of ourselves. Instead of remembering some things, we know who or what to check with instead (a family member, a friend, a book or your phone for example).

Once the device is ready to join our social circle and contribute, it will make valuable contributions.

When it comes to depending on our social circle, the English language is full of expressions and idioms. There’s safety in numbers. Two heads are better than one. It takes a village. Many hands make light work. There’s strength in unity. Collaboration is key. Teamwork makes the dream work. We’re all in this together. Sí, se puede.

That last one was Spanish, but you get the point. Everyone sees the value that this device can bring to the table once it’s capable of acting like a person and joining the cybernetic collective.

Something To Watch Our Sixes

It’s always good to have someone watch your back. But, what if you didn’t need a fellow human to do that? The pin in the review obviously isn’t going to see behind you, but a pair of them probably could. Or, a future 360 camera accessory could provide your AI friend with the ability to see in all directions at once.

I’d really want such a device to use its additional cameras to look out for signs of trouble. The AI system could watch for a person advancing on me too quickly, a weapon, a facial expression indicating aggression and many other warning signs. I wouldn’t hand an AI a gun and let it make use-of-force decisions on my behalf (at least not for a decade or two), but getting a quick, “Watch out behind you!” or, “Duck!” would be useful, while letting me decide how to respond.

Nobody can pay 100% attention all the time. The human brain gets bored, tired and distracted. But, an AI system trained to look out for signs of trouble might be able to do that when we can’t. That would make such a device worth several times its weight in gold-pressed platinum.
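To make the idea concrete, here’s a minimal sketch of the kind of watchdog logic I’m imagining. Everything in it is hypothetical: I’m assuming a future 360-degree camera accessory whose tracker already reports each nearby person’s distance and bearing a few times a second. The names, thresholds and units are all mine, not anything Humane ships. All the monitor does is flag anyone closing fast from the rear arc.

```python
# Hypothetical sketch: when should a wearable say "Watch out behind you!"?
# Assumes an upstream tracker supplies (time, distance, bearing) per person.

from dataclasses import dataclass

@dataclass
class Sighting:
    t: float            # seconds
    distance_m: float   # meters from the wearer
    bearing_deg: float  # 0 = straight ahead, 180 = directly behind

def should_alert(prev: Sighting, curr: Sighting,
                 rear_arc_deg: float = 90.0,
                 closing_speed_mps: float = 2.0,
                 max_range_m: float = 10.0) -> bool:
    """Alert when someone in the rear arc is nearby and closing quickly."""
    dt = curr.t - prev.t
    if dt <= 0:
        return False
    # Behind the wearer: bearing within rear_arc_deg/2 of 180 degrees.
    behind = abs(curr.bearing_deg - 180.0) <= rear_arc_deg / 2
    # Closing speed: how fast the gap shrank between the two sightings.
    closing = (prev.distance_m - curr.distance_m) / dt >= closing_speed_mps
    return behind and closing and curr.distance_m <= max_range_m

# A jogger passing in front shouldn't trip the alert...
assert not should_alert(Sighting(0.0, 8.0, 10.0), Sighting(1.0, 5.0, 10.0))
# ...but someone sprinting up from behind should.
assert should_alert(Sighting(0.0, 8.0, 170.0), Sighting(1.0, 5.0, 170.0))
```

The thresholds (a 90-degree rear arc, 2 m/s closing speed, 10-meter range) are arbitrary, and the actual hard part — spotting weapons and aggressive body language on a battery-powered pin — is exactly what this sketch waves away. But the decision layer really can be this simple once the sensors do their job.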


59 COMMENTS

  1. I didn’t check to see who wrote it before I clicked on the article. Got about halfway down and thought “this is a Jen article.” Checked at the end and it turned out I was correct. Keep it up please, I enjoy your work. As for AI… if I’m not going to trust a random human to do something correctly, why would I assume a random programmer would get it right either? Self-driving cars keep running over motorcyclists due to poor decision making. Programming is going to be the very weak point for AI too.

  2. Good for today’s college students…
    Where am I, little gizmo? You are face down on a sidewalk surrounded by Hamas supporters fixing to beat you senseless. Shall I send a meat wagon?

  3. “You took my sonar concept and applied it to every phone in the city. With half the city feeding you sonar, you can image all of Gotham. This is *wrong*.” –Lucius Fox

    Also:

    • 4 more years pause
      4 more years pause
      4 more years pause
      4 more years pause
      Joseph Robinette Biden is the Greatest President America has or ever will have.
      4 more years pause
      Forever and ever Amen.

  4. Sure wear yer AI shirt. Tattoo 666 on your head. Anything that can be hacked will be. My non-AI gats will work in an EMP attack🙄

    • ” .giv already has enough spy devices.”

      I can see the “Identify what I am seeing” function as being extremely useful for a vision-impaired individual navigating themselves in a city, for example.

      But that’s about it.

      Aren’t Marsupials kinda blind anyways? ;0

  5. Not so sure this is a good idea, even if all the bugs get worked out. It could backfire on the wearer, especially if you are carrying your EDC and some other anti-gun wearer’s gizmo alerts on you or automatically calls a SWAT team. It’s hard enough being discreet as it is now, without adding this kind of ubiquitous surveillance.

    • Not sure it’s a good idea?
      What, you’ve got a problem with a device sending what’s in your living room to the cloud?
      Nothing to hide nothing to fear.

    • People will be begging for technology like this, younger people too lazy to turn on their own lights or adjust their thermostats, unable to formulate a cogent theory/question who must turn to Alexa or Siri. Yes, give your self-control up to a machine, one that is connected to big brother, big brother loves us…suckers.
      Government is always involved when it comes down to CONTROL.

  6. Utility devices which use Artificial Intelligence could be enormously helpful, and enormously detrimental when bad actors use them against us.

    My personal opinion: the potential negative outcomes due to abuse outweigh the potential benefits. Alas, the dreamers and Pollyannas of the world will ultimately whip up the masses to demand it, and Artificial Intelligence powered devices will dominate our lives.

        • Government is like cancer. You don’t cure cancer by cutting half of it out or just stopping it from growing as fast.

          You must cut it all out with a healthy margin of unaffected flesh around it too. Then you need to blast the area around that with radiation and poisonous chemicals just to be sure you got every trace of it.

    • Or end our lives.

      Imagine if AI was actually benevolent, didn’t get hacked and was in charge of everything while working 1,000,000x better than the TTAG comment section.

      We’d be wiped out inside a year. This place has been around for years and years and it’s run about as well as some short-bus 3rd graders would run a nuclear plant.

      • strych9,

        This place has been around for years and years and it’s run about as well as some short-bus 3rd graders would run a nuclear plant.

        Ouch!

        Not to put too fine a point on it:

  7. No such thing as “edge” in self-defense. Self-defense is where the antagonist moves first – ironically, that is “white” in the world of chess. In the US, moving first makes you the black hat.

    Even if the person “advancing on you rapidly” from behind has a face you find threatening. Or is “looking at your butt”.

    • “I’d like to spit some Beech-Nut in deb’s eye, and shoot her with my ‘ole .45…”

  8. The way AI is being written now, all the white men in range will be flagged as a threat and the black guy sucker punching you from behind won’t trigger any alert at all.

    I’m curious as to who will be the ultimate responsible party for all this stuff should any lawsuits go flying. You? The manufacturer of the device? The programmer of the AI? When a self driving car plows through a crowd who will pay? When your alert gadget misses a threat or sees a threat when there is none who will pay? When the bot you bought to maintain your social media makes promises, threats or deals and your company takes a hit who pays?

    The litigious future looks pretty absurd.

    • Given the way that things tend to work, the owner [end user] will be the one who pays initially because that’s the way “attractive nuisances” tend to be litigated and I imagine that would be the initial way they’d go about this.

      Realistically, they wouldn’t be suing the individual owners but their insurance companies by proxy.

      Over time enough of them may get screwed to form a class-action against the manufacturer. You could guess that there might be one for the injured and one for the owners depending on how far things go.

      IMHO, the entire idea of outsourcing our thinking to such a device is 1. insanely stupid and 2. a manifestation of our laziness as a species.

      But then, when one looks at the past 40-50 years one could easily come to the conclusion that for a long time we have allowed credentialism to stand in for actual merit and, in doing so, have created a society that’s mostly “managed” by midwits. Meanwhile actual, useful people have been sidelined or just fucked off to go do something else.

      Congress isn’t a terrible example.

      The rise in the average age of Reps/Senators is what it is because there’s a dearth of people from Gen X downward who just look at politics and say “Uh, no.” This has been a trend since the 1980s.

      Younger people simply don’t run for Congress in the numbers that they did 40+ years ago. Mostly, that’s probably due to not wanting to deal with the way they’ll be attacked in the media or by other pols, in ways which are most uncivil. And for which, at this point, there is essentially no recourse. When Schumer slanders you from a podium at a presser it’s not *acceptable* to run up and drop hammers on his face until the smell of Polydent clears out the room. On the one hand, maybe that’s a good thing. OTOH, that’s why he has no problem engaging in slander. And this isn’t just true of Schumer, many pols engage in this kind of behavior. Interestingly, the R’s mostly do it to each other, but that’s another story.

      That leaves the old-time players mostly undisturbed with no roll-over. The result is predictable from other systems: A combination of stagnation and corruption which is how you’ve gotten a geriatric kleptocracy *running* things and entirely disconnected from the population and modern reality.

      The same thing is true of university faculty in many locations and management of many companies. Even where the CEOs are not old they are not being replaced based on merit in most cases and, seeing this, those who do actually merit the position don’t bother to even throw their hat in the ring. Statistically, the vast majority <55 are checked out in this regard.

  9. Watch the complaining when, like Jesse Jackson, the AI notices that it is more dangerous to have a black man following you down the street than a White man.

  10. Will this product play well with my Dnsys Exoskeleton and Invisibility.com invisibility gizmo? Now all I need is a set of really good shades…

  11. AI has a place in science and engineering. It can do things faster than any human. It may lead the way to sustainable nuclear fusion. But it is extremely dangerous in social situations. Yes, ubiquitous surveillance, from your face to all of your information, all at the push of a button. To make matters worse, it is known to plagiarize and to lie. They are offering it as a research tool in my profession, but I will not touch it. I’ve been doing what I do for over 30 years and I don’t need the help–and some who have used it and had it create false information have paid a large professional consequence. I can’t imagine how much harm it can do through social media.

  12. I am a type one diabetic. I have a Glucose Monitoring Device attached to my body. It tells me when my blood sugar is high or too low with different beeps. It is not perfect, but it sure helps me a lot to control my blood sugar. I could see a device like the author envisions helping us with lots of things in the future. We get Amber Alerts and weather warnings on our smart phones now.

    • Using a Continuous Glucose Monitor (CGM) to give you data to make decisions is entirely different than offloading all, or part of, your thinking to the gadget.

      You’d think that after Dexcom’s little misadventure in 2019 this would be obvious.

      For those who don’t get the reference: In 2019 Dexcom was preparing a firmware update for the interface between their G6 CGM and their insulin pumps capable of Bluetooth connection to the CGM.

      The idea behind this design, originally, was that a user would use the CGM to make decisions about meals but when they were not eating the pump would hold them steady within an acceptable window because it knew their glucose readings from the CGM. It therefore knew when to give insulin and when not to, mostly.

      The system actually worked reasonably well. Until the obvious flaw was found…

      This update was delayed and then canceled because Dexcom, in their very last QC check on the code, realized that someone had inserted malicious code into the update that allowed outside influence over the insulin pump linked to the sensor. This inserted code allowed actual control of the pump to be taken over by an outside actor while leaving all user interfaces to function independently.

      In a nutshell, someone could access the Bluetooth backdoor and order someone’s pump to give them a lethal overdose or simply stop giving them insulin at all. They could order it to follow instructions but modify them: you punch in that you want 2u and it gives you a fraction or a multiple of what you asked for.

      At the same time, the app on the phone wouldn’t show the discrepancy, nor would the screen on the pump. So, someone could walk into an endocrinology clinic and remotely poison a goodly fraction of the waiting room, if they liked. Or send them all into an unexplainable case of DKA.

      Ultimately, Dexcom’s investigation turned up that the malicious code had been inserted by an outside entity (not someone at the company). IOW, they got hacked and someone put this code into their update as it was being built.

      Again, offloading your thinking to something other people control is probably unwise. It’s like trusting the TV. That’s how you end up driving to a large parking lot and waiting an hour for a stranger to inject you with what, to you and them, is mystery juice. Mystery juice that anyone who passed intro MCDB would know to run like hell from if the story on TV was even remotely close to reality, because there is no fucking way that could be made safe with the current tech. Even without the lies and coverups this shit was like playing several trillion rounds of Russian Roulette all at once.

  13. If you think about the way this currently works the idea of using it this way is questionable.

    First, LLMs/AI work by statistical comparison to known datasets derived from the internet. This is why they often end up being hyperliberal.

    So, your AI doesn’t want to be all racist or colonial or oppressive and shit. Does that seem like it’s going to make unbiased decisions on your behalf? Or is it going to incorporate Lefty views as to the age, ethnicity and socioeconomic status of that guy with a knife?

    Secondly, because of statistical associations, even if the thing isn’t biased, it’s not terribly difficult to defeat because people will adapt to its recognition system faster than the AI can adapt to changing behaviors. Simply incorporating random behaviors into approaching a target will flummox such a thing.

    Third, you cannot trust its judgment regardless of a lack of bias. Training/testing of AI drones has shown this. In simulation, when given a mission, the AI sent drones which then attacked in ways that resulted in what the operators believed were unacceptable civilian losses and attacks on vital infrastructure.

    When they attempted to stop the drones from doing this the drones simply went back, blew up the operators, then went and did the *bad things* and then attacked the target.

    Which is to say that their behavior is, at best, unpredictable. Look at Google’s AI or DALL-E 2, which both invented their own languages and started doing *inexplicable* things. Devs couldn’t even begin to figure it out because, again, it’s all being done in an AI language.

    Fourth, at base, what computers do better than people is loops. Computers can accomplish loops extremely fast. You want to repetitively look for a binding domain as a drug target? Computers will find them pretty fast and whittle down massive lists of compounds to small test batches really fast.

    What computers suck at is logical jumps. This is related to point #1. Novel situations will cause this thing to make errors unless it’s a truly well done AGI, which, honestly, if people build that, we’re fuckin’ nuts and we deserve the disasters it will cause. And we can be fairly well assured that this will happen because 1. we don’t understand these things’ behavior at all as it is and 2. everyone involved in this already threw out Asimov’s rules.

    In conclusion, not just no. Fuck no.

  14. So there I was, parked between a rock and hard place.
    I had sent my man wife into the store to get groceries so I could stay out in the car playing with the hologram on my hand when all of a sudden some guy, might have been a girl, or a transformer, went running by screaming “Ah a snack bar! ! !” and blew up the grocery store.
    I thought, “Now damn it, what am I going to make for supper?”
    I ask AI and it was giving me some suggestions when chunks of humans started raining down, *Splat* Harryann’s head came through the window, Cool. Nothing like a little head for supper.

    • There’s a commercial on TV, you might have seen it, this guy says he was thinking of committing suicide so he locks his gunm and bullets up.
      Like, Duh.

      • The government should force everyone to avoid certain areas so I won’t be tempted to relapse. Alternatively, they could provide me with the tools I need to be an addict forever. Do they love me or hate me?

        • Did you vote for The Greatest President America has ever had or ever will have, Joseph Robinette Biden?
          If so the government loves you.

          • I thought they did when they gifted me my free crack kit, but I wasn’t sure. Are they helping me have safe fun for a few minutes followed by misery, or are they helping me die?

      • I think you mean a military veteran has a conflict in his life, and he is, of course, a trained killer. Beware the veterans, especially veterans of the many conflicts politicians involve us in to enhance their own finances. Now that they have done the bidding of politicians and returned broken and forgotten by their country, they are a threat to society.

    • AOC would be delighted to learn that I only use depleted Uranium ammunition rather than lead when I go hunting. I have quite a surplus from the gas centrifuge cascade that I employ to extract fissile Uranium-235 for my thermonuclear hand grenades. AOC would of course be relieved that I don’t use said thermonuclear hand grenades for hunting animals. I reserve them for defending myself from her constituents.

      • Do you have trouble getting enough distance throwing those grenades? Or do you use a grenade launcher? I hear the killzone on those is pretty big.

  15. “That would make such a device worth several times its weight in gold-pressed platinum”

    That would make such a device worth several times its weight in gold-pressed platinum to a government tracking your every move.

    FIFY

    you think government would not?

  16. Oooh Oooh, do I get to wear a hat with one of those spinny things on top? You know, like them Whammo cars got? I’m in!

  17. After Gun Control is through murdering defenseless people by the millions it turns on the Gun Controllers. And once Gun Control digests its prey it begins another hunt…or call it the History of Gun Control on the road to repeating itself…

  18. After reading this article, I would say that this “invention” is NOT ready for “prime time”.
