The views and opinions expressed in this blog are those of the authors and do not necessarily reflect the official policy or position of SnoQap, any other agency, organization, employer or company. Assumptions made in the analysis are not necessarily reflective of the position of any entity other than the author(s). These views are subject to change and revision.

How Much Do We Trust Our Technology?

We depend heavily on technology to get through any given day. Much of it consists of basic, trusted items, such as household appliances, watches, and cars: fixtures that have been around for decades and improved over time. In more recent years, however, we have come to rely on newer technology, like computers, GPS, and smartphones. While it is still possible to live without them, daily life is increasingly tailored to their presence, and many now consider them vital to thrive. It is safe to say we generally trust these devices to assist us. But as technology and artificial intelligence advance, what exactly will it mean to trust such a device?

Appliances and devices that cannot communicate back to us are not trusted in the same way as devices that can. We consider these non-smart devices reliable or unreliable based on how well we think they work, but trust in its most basic form is not really part of the equation; we do not have to wonder whether our refrigerator is credible. Smart assistants, such as Siri and Alexa, are different. How often do you stop and wonder how accurate their information is? Do you ever ask where on the internet they found it? Or do you simply trust that the robot has done its research?

It is easy to trust devices we directly control and monitor. If you are driving somewhere with a GPS and it tries to steer you toward a road that isn't there, you know to ignore its instructions and keep driving until it recalculates. But if you were in a self-driving car and not watching the road, would you trust it to get you to the right place safely? Our relationship with trust and technology can be illustrated by the baby monitor. A classic baby monitor works like a walkie-talkie: you directly monitor the situation in the nursery by keeping your own metaphorical ear in the room. Newer setups even include cameras that let you check in visually from a device. But what if the monitor employed newer technology still? Suppose a sensor in the nursery sends you a text alert whenever it detects a loud noise. Would you trust it as much as a system you monitor directly? The answer may depend on the person. When it comes to important situations, do we trust our own instincts or a computer's?

A recent study (Gombolay et al.) tested trust in computerized decisions on a labor and delivery floor. A learning program generated recommendations for patient care, and nurses could accept or reject the advice, which was delivered in two ways: by a computer and by a standing robot. Advice designed to be good was accepted about 90% of the time in both conditions. The study also had the devices deliberately generate bad advice at times, to check that users were weighing the decisions rather than blindly accepting the program's output. The results showed that people learn over time to exhibit a proper, healthy amount of skepticism. We should not accept robotic assistance without thinking, but neither should we distrust it by default.

For the most part, robots are limited by the humans who create and use them. A robot knows what its programmer designs it to know; if the human lacks information, the robot will too. And while some programs can learn from data sets and make decisions, they may not always be making informed decisions in the way we might think. This was seen in one machine learning experiment meant to determine whether a picture showed a wolf or a husky, a task a human could do easily enough. After training, the system turned out to be right mostly when the wolf or husky appeared in the environment the model expected. It was not actually looking for the features of a wolf to determine it was a wolf; it was looking for snow, because in most of the training pictures wolves stood on snow and huskies stood on grass. Machines can be incredibly intelligent, but only as intelligent as we can make them, and in conditions like these they can end up looking remarkably unintelligent next to a human performing the same task. That is not an indictment of the technology; it just means we need to train it better.
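This kind of shortcut learning is easy to reproduce in miniature. The sketch below is a hypothetical toy version of the wolf/husky story, not the original experiment: each "image" is reduced to two binary features, a noisy animal cue and a background cue ("snow") that is perfectly correlated with the label in the biased training set but random in the test set. A simple perceptron trained on the biased data latches onto the background and loses accuracy once that crutch disappears.

```python
import random

random.seed(0)

# Hypothetical toy data: each "image" is just two binary features.
# wolf_cue - the animal's own features, but a noisy signal (wrong 25% of the time)
# snow     - the background; perfectly tied to the label in the biased training
#            set, random in the unbiased test set (mimicking wolf-on-snow bias)
def make_example(is_wolf, biased):
    wolf_cue = is_wolf if random.random() < 0.75 else not is_wolf
    snow = is_wolf if biased else random.random() < 0.5
    return (1.0 if wolf_cue else 0.0, 1.0 if snow else 0.0), (1 if is_wolf else 0)

train = [make_example(i % 2 == 0, biased=True) for i in range(200)]
test = [make_example(i % 2 == 0, biased=False) for i in range(200)]

# A bare-bones perceptron: one weight per feature plus a bias term.
def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

w = [0.0, 0.0, 0.0]
for _ in range(20):  # a few passes over the training data
    for x, y in train:
        err = y - predict(w, x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err

def accuracy(data):
    return sum(predict(w, x) == y for x, y in data) / len(data)

train_acc = accuracy(train)  # near-perfect: "snow" alone separates the training set
test_acc = accuracy(test)    # far worse once the background is no longer a clue
print(train_acc, test_acc)
```

The model looks excellent on the data it was trained on, which is exactly why the bias is dangerous: nothing in the training accuracy reveals that it learned the background rather than the animal.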

Technology is going to keep developing and moving deeper into our lives, and we know it has the capacity to make intelligent decisions in some cases. The question is how to build reliable, high-quality solutions to the multitude of problems computers could tackle. That suggests a two-part approach as technology evolves. First, the engineers, designers, and programmers building these systems have to keep checking them. For us to trust technology, we need to keep putting care and thought into what we create. If humans can work out well-informed, complex solutions, then a robot could too, given the information, the processing ability, and the opportunity; developing that capacity may one day let robots efficiently solve problems that are incredibly difficult for humans to manage in a timely manner, or at all. Second, consumers need to keep making intelligent use of these products. We should not be suspicious of technology, but we should treat its recommendations with the same care and consideration we would give another person's. As we approach a time when technology may be smart enough to become something like a person in its own right, we need to be ready to handle that question of trust.

Works Cited
Gombolay, Matthew, et al. “Robotic Assistance in the Coordination of Patient Care.” The International Journal of Robotics Research, vol. 37, no. 10, 22 June 2018, pp. 1300–1316, doi:10.1177/0278364918778344.
