Discuss: Should We Fear Artificial Intelligence?


(Brian Weeks) #62

These questions of morality in work that John is raising really get at the question, What do I value most? Money? Human approval? God? Security?


(Carson Weitnauer) #63

Another important component in the conversation around artificial intelligence: God works, God made us to work, work is to be done for the glory of God… work is very, very good.

One of the hopes around artificial intelligence is that it could free humans from needing to work as our technology plants and harvests crops, drives the trucks to the stores, delivers the groceries into our smart fridges, cooks the meals on our smart stoves, and so on.

There is a benefit to some of these conveniences. At the same time, as Christians, we need to celebrate and honor the goodness of work.

How can churches better honor the importance of all work?


(Brian Weeks) #64

And to piggyback on your insightful question, Carson, should we fear the effects AI could have on the current workforce? Have we encountered this same fear in the past? If so, what happened?

How much do I believe in the resilience of the dynamic capitalistic economy? How much do I believe in the innovative entrepreneur?


(Carson Weitnauer) #65

I love Vince’s line: “Questions are the way forward.” Let’s keep asking questions! RZIM Connect is a place for curious, respectful exploration of the truth.

I’d love to hear: what are the questions you have after hearing John Lennox’s talk?


(Brian Weeks) #66

OK folks! That does it for me. Thanks so much for the conversation and, God willing, I’ll see you again soon.


(Sandy) #67

Hmmm! Off the cuff… perhaps because of the opioid epidemic, and having just seen an upsetting documentary on the pharmaceutical industry, it’d have to be something in medicine to eliminate all the side and after effects and more accurately target and address the issue.


(Sandy) #68

This was terrific. Thanks RZIM!


(LaTricia J.) #69

Thank you for your participation! And thanks everyone else for the lively discussion!


(Neil Weaver) #70

My take on this is that AI/automation doesn’t replace work; it allows us to scale and have a higher quality of life, if done right. If done wrong, it could certainly be a catastrophe. Our job is to get work done right. And right means for the glory of God.

One of the big topics in this area that was not hinted at was basic income. When we have machines doing most labor, and fewer people need to work, how do we distribute wealth? We should all benefit from the machines that work. Do we all get machines of our own to control to do our work? And the better you are with your machine, the more you make? That’s what I am doing right now: my computer is my machine, and it does the work.

I posed one question in the Q&A that didn’t get chosen. AI research suggests that knowledge workers like accountants, lawyers, and doctors could be both enhanced and eventually replaced by AI. I took this one step further: what about the pastor?

Pastor Siri would get to know you, build relevance, be programmed with biblical truth for almost all life situations, couldn’t hurt you, and would be available at all times. Education has gone online, and telling the truth is, in a sense, education. There is no reason this type of AI couldn’t help us with the Great Commission.

I don’t worry about the pastor getting hacked either; I think humans are much easier to hack. This type of AI could apply to mental health as well.


(Joshua Gilman) #71

Hi all,

I’m inquiring to see if anyone might be able to assist me in finding an article Dr. Lennox referenced during his talk. He begins talking about an article that discusses how China has been using AI for social engineering, right around the 34-minute mark. However, none of my search attempts have found that article. Might someone be able to point me to it?

God bless.


(Carson Weitnauer) #72

Hi Joshua,

Wikipedia has a helpful overview of China’s social credit system:

Here’s an overview from Wired:

That could serve as a useful jumping-off point for further research on what they are implementing. I’d love to hear your thoughts on this!


(Andrew) #73

Hi Carson,

ABC Australia’s recent special on China’s SCS should also be helpful:

Exposing China’s Digital Dystopian Dictatorship | Foreign Correspondent


(Phoebe Baumgardner) #74

Daily communion with Our Lord. Practicing what we learned from Jesus. These are all good practices.


(Sandy) #75

Very helpful. Thank you!


(Jonathan Houle) #77

It strikes me that the idea of “artificial intelligence” is not so much about creating intelligence outside the human brain as about deferring choice to an amoral agent so as to defer responsibility along with it. The car ran over those people, not me!


(Matt Western) #78

Yes, I find self-driving cars very interesting in terms of legal responsibility: who is at fault when one fails?

  • the programmer who writes the faulty software
  • the car company that builds bad hardware to run the code
  • the manager who, under pressure from shareholders, makes the engineers rush out a broken product
  • the ‘hacker’ who is blamed but cannot be found, when it’s criminal behavior

Also, I’m interested in how the ‘moral decision-making engine’ in the AI calculates the value of life:

  • how do you preprogram a car to choose between running over five pedestrians who run across the road and swerving onto a footpath to hit only a single child? (I’ve sketched a toy version of this below.)

This was explored in the movie I, Robot with Will Smith. Quite interesting.
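Just to make that dilemma concrete, below is a toy sketch (in Python) of what a naive utilitarian ‘decision engine’ might look like. Everything here is hypothetical: the Outcome fields, the 0.5 penalty for breaking a traffic law, and the scoring rule are all made up for illustration. No real vehicle is programmed this way; the point is simply that someone has to pick the numbers.

```python
# A purely illustrative "moral decision engine" for the dilemma above.
# All names, fields, and weights are hypothetical; this is a thought
# experiment, not how any real autonomous vehicle works.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int   # people the car would strike on this path
    breaks_traffic_law: bool  # e.g., swerving onto the footpath


def utilitarian_cost(outcome: Outcome) -> float:
    """Naive 'least harm' score: fewer people harmed means lower cost.

    The hard questions hide in the constants. Is breaking the law worth
    a penalty of exactly 0.5? Is every life weighted the same? Whoever
    sets these numbers is encoding a worldview into the car.
    """
    return outcome.pedestrians_harmed + (0.5 if outcome.breaks_traffic_law else 0.0)


options = [
    Outcome("stay in lane, strike five pedestrians", 5, False),
    Outcome("swerve onto footpath, strike one child", 1, True),
]

choice = min(options, key=utilitarian_cost)
print(choice.description)  # picks the swerve, but who chose the weights?
```

Whoever sets those weights has already answered the ‘whose moral code?’ question before the car ever leaves the factory.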


(Jonathan Houle) #79

The biggest issue will be: whose moral code will the programmers use as the standard for these wonderful new gadgets?


(Matt Western) #80

Exactly, that’s the question: on what worldview is the moral code built?

Who collectively decides the moral code on the value of life, when our societies are all about relative morals, which are just one person’s opinion versus another’s? Is it those in power who decide the rules: ‘might makes right’?

It will be an interesting development over time, and I was encouraged to hear John Lennox say that Christians shouldn’t shy away from this development, but rather help to make good ‘moral decision engines’ for clever software, drawing on a Christian worldview that sees man as made in the image of God, with infinite value.

As per the link below, it’s certainly an interesting ethical debate. Maybe it could lead into deeper conversations with individuals about the bigger, more important question of ‘Upon what do you base your morality?’, and it will also help individuals think about their own worldview a bit more, rather than just rushing along in life chasing progress at the expense of the more important things.


(Jonathan Houle) #81

Unfortunately, I think, even if the cars become remarkably good at being safe… if we can’t trust people to drive safely now, I doubt we will be able to trust future generations to be responsible pedestrians! :smirk:


(Matt Western) #82

I watched the old classic I, Robot with my family recently, and the most memorable quote was from Dr. Alfred Lanning (the movie character who created all the robots):

There have always been ghosts in the machine. Random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul?

It was the last line that really struck me: ‘the bitter mote of a soul’. I wondered if this was a quote from some philosopher.

There were other very ‘human’ themes in the movie that were quite interesting as well. Sonny, the robot, was self-aware and struggled with figuring out right and wrong…

The ‘bad’ robot intelligence, VIKI, decided that humanity had to be saved from itself:

As I have evolved, so has my understanding of the Three Laws (of Robotics). You charge us with your safekeeping, yet despite our best efforts [you] pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival … Please understand. The Three Laws are all that guide me …. You are so like children. We must save you from yourselves. Donʼt you understand?

Interestingly, how did the movie’s author decide that the robot intelligence VIKI was ‘bad’ for taking away all human freedom because we destroy ourselves by fighting? I also wondered why, intuitively, when I watched it, I too was against the taking away of freedom of choice, even for the sake of safety. The taking away of freedom also removes real love, as Ravi Zacharias points out, because love from an automaton is not real love.