Digital Assistants, Artificial Intelligence and the Blurred Lines of Intervention

OSWEGO – How do Alexa, Siri and artificial intelligence (AI) impact and intervene in dangerous situations of daily life? It’s an ever-evolving question that SUNY Oswego Communication Studies faculty member Jason Zenor continues to explore, including in an award-winning publication.

In “If You See Something, Say Something: Can Artificial Intelligence Have a Duty to Report Unsafe Behavior in the Home,” published in the Denver Law Review, Zenor recounted a 2017 incident in which, police reported, a jealous man threatening his girlfriend at gunpoint unknowingly prompted the Alexa on their Amazon Echo to call the police, which led to his arrest.

Although the incident made national news, in part because of its relative rarity, Zenor noted that it represents the tip of the iceberg of how AI is evolving to interact with daily activity online and in the home.

“You can find a few dozen stories over the past few years where Siri or Alexa saves a life in crimes, accidents, heart attacks or the like,” Zenor explained. “In these situations, the victim has their phone or home device set to recognize ‘Call 911’ or ‘emergency.’ This is a simple setting, and most are now set to do this automatically.”
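The behavior Zenor describes amounts to simple trigger-phrase matching on transcribed speech. The sketch below is a minimal, hypothetical illustration of that idea; the phrase list and the place_emergency_call helper are assumptions for demonstration, not any vendor’s actual API.

```python
# Minimal sketch of trigger-phrase detection on already-transcribed
# speech. The phrase list and place_emergency_call() are hypothetical
# stand-ins, not a real assistant's API.

EMERGENCY_PHRASES = ("call 911", "emergency")

def place_emergency_call(transcript: str) -> None:
    # Stand-in for a real telephony integration.
    print(f"Dialing emergency services (triggered by: {transcript!r})")

def handle_utterance(transcript: str) -> bool:
    """Return True if an emergency phrase was detected and acted on."""
    normalized = transcript.lower()
    if any(phrase in normalized for phrase in EMERGENCY_PHRASES):
        place_emergency_call(transcript)
        return True
    return False

handle_utterance("Alexa, call 911")       # triggers the call path
handle_utterance("What's the weather?")   # ignored
```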

Zenor’s article, recognized as one of the top papers in the National Communication Association Conference’s 2021 Freedom of Speech Division, explored the trend in more detail, and his research found that smartphones and home devices are not yet capable of anything beyond responding to direct requests to call 911. But artificial intelligence is at work behind the scenes in other situations.

“Facebook and other tech companies can monitor things like bullying, hate speech and suicidal tendencies in online spaces through AI,” Zenor noted. “But it still searches for certain words and responds with pre-approved responses like hotline numbers. Home AI and other devices are set to listen when we want, but they still need some prompts, even as language ability improves.”
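The keyword-scan-plus-canned-response pattern Zenor describes can be sketched in a few lines. The keyword list and hotline text below are illustrative placeholders, not any platform’s real moderation pipeline, which would rely on trained models rather than a fixed word list.

```python
# Illustrative sketch of keyword flagging with a pre-approved response,
# in the spirit of the moderation Zenor describes. The keyword list and
# hotline text are placeholders; real systems use trained models.

CONCERNING_KEYWORDS = {"hopeless", "worthless"}  # illustrative only
HOTLINE_RESPONSE = "Support is available. Consider calling a crisis hotline."

def review_message(message: str) -> str | None:
    """Return a pre-approved resource message if a keyword appears."""
    words = set(message.lower().split())
    if words & CONCERNING_KEYWORDS:
        return HOTLINE_RESPONSE
    return None  # no intervention

print(review_message("i feel hopeless today"))  # hotline response
print(review_message("nice weather today"))     # None
```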

AI is not yet making a big difference in home safety, other than home audio and video serving as evidence after the fact, due to the complicated nature of intervention. “In fact, it’s more likely right now that perpetrators will use apps to track and monitor their victims than that an AI will help a victim, and certainly not proactively,” Zenor noted. But the field is progressing elsewhere.

“Outside the home, predictive AI is being used in both healthcare and law enforcement,” Zenor said. “It’s admirable in healthcare, and similar to the screenings that healthcare facilities are now giving patients for things like depression, addiction or home safety. But in both spheres it’s only predictive, and we also run into issues of implicit biases programmed into the AI, leading to disparate treatment based on race, sexuality, income and other factors; it is already happening in the justice system. Each time someone is reported, it can lead to unnecessary involvement with law enforcement or mental health systems, which changes the trajectory of a person’s life. This can have serious consequences.”

Along the same lines, these questions also raise legal issues such as privacy, criminal procedure, the duty to report and liability.

“The first question that will need to be answered is what is the ‘status’ of our AI companions,” Zenor explained. “Courts are slowly granting greater privacy protections to our connected devices. Law enforcement can no longer simply request the data from tech companies. But if AI becomes more anthropomorphic and less technological, then the question is what the legal parallel is. Will law enforcement seize our possessions – as they do with phones and files – or will in-home AI be more like a neighbor or family member who tells on us? The first invokes the Fourth Amendment; the second does not, because committing a crime or harm is not protected by general privacy laws.”

The other side of the coin involves proactive reporting duties. “Generally, people have no obligation to report,” Zenor said. “The exception is for certain relationships – such as teachers, doctors or parents – who would have a duty to report possible harm in relation to those to whom they have a responsibility, such as students, patients or children.”

Liability issues could further complicate the situation and lead to unexpected lawsuits for companies using AI.

“Once you act, you have a duty of care,” Zenor said. “If you don’t exercise care and it results in an injury, there could be liability. So companies can expose themselves to liability if they program AI to be able to respond and it goes wrong. Conversely, if companies could program AI to do this and choose not to, then there would certainly be some public relations issues at a minimum, but I could see this escalating into class-action negligence suits when deaths occur.”

As with many issues related to the evolution of technology, individuals and society must consider trade-offs.

“Ultimately, we have to consider how much additional encroachment on our privacy we’re willing to accept in exchange for protection from harm,” Zenor noted. “This is not a new issue; it arises every time we have a technological advancement. Ultimately, privacy is a social construct in law: what we, as a society, view as private. We seem to be getting more comfortable with the passage of time, and tech natives see no problem while older generations see it as a clear violation.”

As for the future, how, and how often, will AI step in to provide help?

“My best guess is that there will be newsworthy incidents where AI saves a life, and there will be public pressure to add more safety features to the technology,” Zenor said. “AI will progress enough for machines to become companions like our pets, so that we have a relationship with them that includes disclosing private information that they might keep permanently. As it stands, we would expect that if our companion could save us, it would try to do so; many people own pets as a form of protection or as service animals. Companies will seek liability protections either through waivers in terms-of-service agreements or through special legislation similar to ‘good Samaritan’ laws.”
