zlacker

[parent] [thread] 13 comments
1. golemo+(OP)[view] [source] 2016-12-05 20:17:40
I find it disturbing too. Right now one of the top stories on HN is about Amazon Go and the top comment is about whether the destruction of jobs it could cause is socially acceptable.

I don't know whether that is politics or not, but I can't imagine discussing Amazon Go as a technology without having that discussion. In fact, when you look at HN, very little is about particular technologies. Most of our discussion is about the implications.

replies(4): >>mattne+71 >>michae+j3 >>karamb+M4 >>yagga+rR
2. mattne+71[view] [source] 2016-12-05 20:24:53
>>golemo+(OP)
I can't agree enough. I'm all for flagging uncivilized discussions, but preemptively censoring things that might turn into uncivilized discussions seems like throwing the baby out with the bathwater - we need to be able to talk about difficult things.
replies(1): >>mighty+Rh
3. michae+j3[view] [source] 2016-12-05 20:35:53
>>golemo+(OP)
I've been attending a lot of AI/Data Science conferences lately, and it's incredible to me how this is not a major part of any conversation about AI and automation.

At a talk I recently attended, a data scientist from Amazon gloated about how many jobs he could eliminate.

Ironically, the only speaker who brought it up as a major social problem we'll have to tackle is someone from Uber. His solution was less than satisfactory, but at least he recognized the issue.

I don't want to pretend we live in a world of algorithms without consequence.

replies(2): >>karamb+47 >>CN7R+ge
4. karamb+M4[view] [source] 2016-12-05 20:43:23
>>golemo+(OP)
As technologically inclined people, we have a duty to have that discussion.

We work on technologies that impact, in one way or another, other people's lives.

As you correctly point out, the discussion of the social impacts of Amazon Go is currently open in another thread and I consider that a must.

Another example of why we need politics here, and in our heads when we design something, is the case of Tristan Harris [0], a former Google employee.

I am not saying I agree with Tristan Harris, or with one side or the other in the Amazon Go thread, but I consider HN a place where civil political debate needs to take place, because we have a moral duty to have it.

We are, in a way, the 1% of "technologically aware people" (and probably among the wealthiest 10% in the world...). We need to discuss these issues and we need to think before we act. I'm not trying to re-enact the 99% battle, but our privileges do come with a price, and that price is thinking before we act...

I urge the people on the Amazon Go team to have that discussion. Do they consider working on that project socially acceptable for themselves or not, and why?

Do I consider, as a SaaS marketing provider, my job socially acceptable, and why? That is something I, both as a citizen and a business owner, need to think about and openly discuss with my customers, shareholders, and consumers/citizens if need be.

I am probably stating the obvious here, but the etymology of politics is politika, "affairs of the cities": aren't we all, as technology workers/operators/..., living in these cities?

[0]http://www.realclearlife.com/2016/10/27/former-google-produc...

◧◩
5. karamb+47[view] [source] [discussion] 2016-12-05 20:56:34
>>michae+j3
As a "data scientist" myself and more importantly as a human being, I witness the same behaviour almost every day and it baffles me.

Yes, what we do can have consequences. We need to think about that!

I have friends working for weapons manufacturers. They don't gloat about building stuff that can blow children up!!! Why the hell should we be absolved of the moral consequences of our acts?

I am not equating eliminating jobs with killing children, but I would prefer it if our industry did not abstain from thinking about the consequences of its trade.

I once had to decide whether to work for a weapons manufacturer for a very nice salary. I chose not to work for them. But I thought it through thoroughly, and I don't blame my friends for making a different choice. I politically object to that choice, but it does not mean I am some sort of white knight...and it does not mean that sometime in the future, if presented with another opportunity, I wouldn't make a different choice...

replies(1): >>bduers+Hg
◧◩
6. CN7R+ge[view] [source] [discussion] 2016-12-05 21:38:04
>>michae+j3
I think we're still in the early phase of A.I., where things seem more theoretical and thus ethics is not included in the discussion. However, as we near the time when policies will have large-scale implications for our society, those consequences will be weighed. This is why I do not think A.I. will be a revolution but rather a gradual process. Already, the automation of cars is subject to government regulation.
replies(2): >>michae+Ge >>karamb+9k
◧◩◪
7. michae+Ge[view] [source] [discussion] 2016-12-05 21:40:03
>>CN7R+ge
True. That said, it's much less theoretical to the people doing it than to the average blue-collar worker whose life they're disrupting.

That's why I think the industry has a moral and practical responsibility to push society to properly prepare for the results. Because we understand the implications better than anyone.

◧◩◪
8. bduers+Hg[view] [source] [discussion] 2016-12-05 21:54:05
>>karamb+47
It's the gun-manufacturer/shooter dissociation.

Anecdotally, I had a neighbor who programmed the guidance systems for bombs, and the only reason I remember him is because immediately after introducing himself as such, he followed up with, "But I'm not the one who's dropping them. By making them smarter I can save lives".

I think that no matter how technically intelligent a field's operators are, they are still subject to the same dissociations as everyone else.

replies(1): >>karamb+zm
◧◩
9. mighty+Rh[view] [source] [discussion] 2016-12-05 22:02:49
>>mattne+71
We need to be able to talk about them, but it doesn't need to be here. There's value in places where you know you can come to discuss certain categories of things and avoid others. That is what's going on here.
replies(2): >>golemo+fi >>mattne+dC
◧◩◪
10. golemo+fi[view] [source] [discussion] 2016-12-05 22:06:23
>>mighty+Rh
It seems that because the social implications of technology can lead to political discussion, they're out of bounds.
◧◩◪
11. karamb+9k[view] [source] [discussion] 2016-12-05 22:18:26
>>CN7R+ge
We have been working on AI in one form or another for 50 years or more; I think it is high time for a serious debate about this.

Lisp dates from 1958, and some would argue that rule-based programming is AI. ELIZA is also about 50 years old.

The ethics of AI have been extensively discussed for a very long time.

In essence, the debate taking place around AI is an heir of the 19th-century debate on automated looms. Karel Čapek's play R.U.R. (Rossum's Universal Robots) was written in 1920, and it was already an ethical discussion of "autonomous machines"...

My first introduction to AI and its consequences and dilemmas came from Isaac Asimov's Foundation cycle, which dates back to the 1950s.

AFAIK, the 3 Laws of Robotics invented by Asimov are actually used by philosophers & AI practitioners.

(I added and then removed references to the Golem, but...it could be argued as relevant to this discussion)

I am quite vehement in this discussion precisely because I am currently debating whether or not I should release a new piece of AI software I have designed. From a technical standpoint, I am quite proud of it; it is a nice piece of engineering. From a political standpoint, I feel the tool could be used for goals I am not sure I agree with...

◧◩◪◨
12. karamb+zm[view] [source] [discussion] 2016-12-05 22:33:30
>>bduers+Hg
You are absolutely right and I can totally relate to both your experience and your neighbor's.

I don't program guidance systems for bombs, but I program marketing tools which are, in essence, tricking consumers into buying stuff. I dissociate myself from that issue by considering that any commercial relationship is based on tricking the other party into buying more stuff, but I would totally understand if someone objected that my software is not morally acceptable to them (and I would politely suggest that they go bother someone else :p ).

Further down the line, we could end up discussing whether living in a society based on capitalism is "right" or "wrong". I would totally understand if people considered that "not an HN-worthy submission", but I think that inside a thread on the moral, philosophical and social consequences of AI, it could come up as a subject...and be down-voted if need be, not flagged as off-topic.

◧◩◪
13. mattne+dC[view] [source] [discussion] 2016-12-06 01:02:40
>>mighty+Rh
I'm not arguing for a free-for-all; I am arguing that the proposed ban on politics for a week is too broad a brush, because it catches several relevant conversations, and the serious offenses are already against the rules.
14. yagga+rR[view] [source] 2016-12-06 04:49:25
>>golemo+(OP)
With automation, robotics, and AI, the world does not need so many bio-robots anymore. Before, we needed dumb people to do manual labor. Not anymore. Machines can do it. Most people don't want to learn and change. What to do with the bio-mass that can only eat, shit, and have fun?