Computing has changed everything. What next?


As I write this, my laptop has way too many open tabs. A Zoom meeting is about to start, and I’m getting pinged in the magazine production channel on Slack. The managing editor is asking if I can do a final approval on a news page. When done, I’ll either mark it as “clean” on a Google Sheet or dive into InCopy to generate a corrected PDF and save it to Dropbox.

While the paragraph above makes perfect sense to Present Day Me, the me of the past would have no idea what’s going on. Laptop? Is that some sort of clipboard?

In this issue, as part of our ongoing Century of Science project, we dig deep into how the extraordinary advances in computing over the last 100 years have transformed our lives, and we ponder implications for the future. Who gets to decide how much control algorithms have over our lives? Will artificial intelligence learn how to really think like humans? What would ethical AI look like? And can we keep the robots from killing us?

That last question may sound hypothetical, but it’s not. As freelance science and technology writer Matthew Hutson reports, lethal autonomous drones able to attack without human intervention already exist. And though killer drones may be the most dystopian vision of a future controlled by AI, software already makes decisions about our lives every day, from which advertisements we see on Facebook to who gets denied parole.

Even something as basic to human life as our social interactions can be used by AI to identify individuals within supposedly anonymized data, as staff writer Nikk Ogasa reports. Researchers taught an artificial neural network to identify patterns in the date, time, direction and duration of weekly mobile phone calls and texts in a large anonymized dataset. The AI was able to identify individuals by the patterns of their behavior and that of their contacts.
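For readers curious what such a re-identification attack might look like, here is a minimal, purely illustrative sketch in Python. It is not the researchers’ actual approach, which also drew on the behavior of a person’s contacts; it simply shows how a small neural network (scikit-learn’s MLPClassifier here) can match anonymized weeks of synthetic call and text metadata back to the individuals who generated them. The data, names and parameters below are all invented for illustration.

```python
# Illustrative sketch only -- NOT the published method, which used richer
# features drawn from a person's contacts as well as their own behavior.
# Idea: habits in call/text metadata (hour of day, direction, duration)
# form a behavioral fingerprint a small neural net can learn to recognize.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_PEOPLE, WEEKS, EVENTS = 50, 20, 120   # synthetic population

def simulate_week(person_seed):
    """Generate one week of fake call/text metadata for one person."""
    r = np.random.default_rng(person_seed)   # fixed per-person habits
    peak_hour = r.integers(8, 22)            # habitual calling hour
    mean_dur = r.uniform(30, 600)            # typical call length, seconds
    out_frac = r.uniform(0.3, 0.9)           # share of outgoing events
    hours = rng.normal(peak_hour, 2.0, EVENTS) % 24
    durs = rng.exponential(mean_dur, EVENTS)
    outgoing = rng.random(EVENTS) < out_frac
    # Feature vector: hourly histogram plus simple duration/direction stats
    hist, _ = np.histogram(hours, bins=24, range=(0, 24))
    return np.concatenate([hist / EVENTS,
                           [durs.mean() / 600, durs.std() / 600,
                            outgoing.mean()]])

X = np.array([simulate_week(p) for p in range(N_PEOPLE) for _ in range(WEEKS)])
y = np.repeat(np.arange(N_PEOPLE), WEEKS)   # true identities (the "answer key")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print(f"Re-identification accuracy on held-out weeks: {clf.score(X_te, y_te):.0%}")
```

Even this toy version makes the point: no names or phone numbers appear anywhere in the features, yet routine patterns of behavior are often enough to pick a person out of the crowd.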

Innovations in computing have come with astonishing speed, and we humans have adapted almost as quickly. I remember being thrilled with my first laptop, my first flip phone, my first BlackBerry. As we’ve welcomed each new marvel into our lives, we’ve bent our behavior. While I delight at being able to FaceTime with my daughter while she’s away at college, I’m not so pleased to find myself reflexively reaching for the phone to … hmm, avoid finishing this column. I could download a productivity app that promises to train me to stay focused, but using the phone to avoid the phone seems both too silly and too sad.

Not enough computer scientists and engineers are trained in the social implications of their technologies, including ethics, Hutson writes. More important, they aren’t having enough conversations about how the algorithms they write could affect people’s lives in unexpected ways before the next big innovation gets sent out into the world. As the technology gets ever more powerful, those conversations need to happen long before the circuit is built or the code is written. How else will the robots know when they’ve gone too far?