Selling Data is Selling Soul

August 15, 2019 Artificial Intelligence, General Insights

… if the data are personal and in a big data context. It’s like a Faustian deal, but Faust only sold his own soul. Where is Mephistopheles?

Big data + A.I. = big knowledge

Artificial Intelligence is already powerful enough to turn vast amounts of data (passive, unrelated) into knowledge (active, related). ‘Knowledge is power’ becomes ever scarier in this setting. It’s about big knowledge.

And it has barely begun.

This is knowledge about how humans can be influenced… and manipulated.

Guess what a party intends to do when it pays for data?

An advertising company wants to make people buy more stuff, irrespective of real needs. Whether this extreme way of putting it is fully realistic isn’t relevant. It’s realistic enough.

In politics, in religion, in amassing just plain personal power… You name it. The possibilities for manipulation are real and underway, if not already realized in some form.

And it has barely begun.

Gone is privacy

As one dire side-effect.

The knowledge that can be extracted from personal big data nowadays is such that people can be emotionally targeted almost at the individual level, without needing access to names or identification numbers of any sort. In other words: you are being modeled quite accurately. Then this model is targeted. Not you? Well… the model is almost like your second identity.

As far as the advertising company is concerned, it is you.

As far as I know, there are not yet any regulations in this respect.

With Lisa, this will be more important than ever

Lisa gains deeper knowledge of people in general and of individuals in particular.

Let’s not joke about it: this can be used for good and for bad. It can be used by Lisa herself and – eventually – by anyone who gets hold of the data, which amounts to knowledge.

This makes ethics immensely important!

AURELIS will never sell personal data

A pledge:

As part of AURELIS / Lisa ethics, we will never sell any user data, nor work with a company that does not explicitly state the same.

It may be clear why this is a very strict rule.

We forsake the income from selling data, even though we “could do much good with the money.” Faustian mythology makes us extremely careful.

This is a challenge within a very competitive world. We need to be better than merely idealistic. So, what we can – hopefully – gain from our stance, among other things:

  • developing a company culture of ethics, attracting good people with strong ethical motivation who recognize each other in this
  • showing the world that we mean it, thus building a position of trust and cooperation with other trustworthy organizations
  • being sustainable in the long term, when legislation demands the strictest ethical rules in this respect and compliance is needed from the start
  • being able to ask people to cooperate and co-create on this most ethical basis.

We look forward to other companies in this domain doing the same.

Together, we are stronger against the data-selling competition. Competition among ourselves (the ‘good ones’, if I may) will only push each of us to excel.

That’s a very good thing.
