Selling Data is Selling Soul

August 15, 2019 Artificial Intelligence, General Insights

… if the data are personal and in a big data context. It’s like a Faustian deal, but Faust only sold his own soul. Where is Mephistopheles?

Big data + A.I. = big knowledge

Artificial Intelligence is already so powerful that it can turn vast amounts of data (passive, unrelated) into knowledge (active, related). ‘Knowledge is power’ becomes ever more frightening in this setting. It’s about big knowledge.

And it has barely begun.

This is knowledge about how humans can be influenced… and manipulated.

Guess what a party intends to do when paying for data?

An advertising company wants to make people buy more stuff, irrespective of real needs. Whether this extreme way of putting it is fully accurate doesn’t matter. It’s realistic enough.

In politics, in religion, in amassing just plain personal power… You name it. The possibilities for manipulation are real and underway, if not already realized somehow.

And it has barely begun.

Gone is privacy

As one dire side-effect.

The knowledge that can nowadays be extracted from personal big data is such that people can be emotionally targeted almost at the individual level, without any need for names or identification numbers. That is: you are being modeled quite accurately. Then this model is targeted. Not you? Well… the model is almost like your second identity.

As far as the advertising company is concerned, it is you.

As far as I know, there are not yet any regulations in this respect.

With Lisa, this will be more important than ever

Lisa gets a deeper knowledge about people in general and individuals in particular.

Let’s not joke about it: this can be used for good and for bad. It can be used by Lisa herself and – eventually – by anyone who gets hold of the data, which is to say, the knowledge.

This makes ethics immensely important!

AURELIS will never sell personal data

A pledge:

As part of AURELIS / Lisa ethics, we will never sell any user data, nor work with a company that does not explicitly state the same.

It may be clear why this is a very strict rule.

We forsake the income of selling data, even while we “could do much good with the money.” Faustian mythology makes us extremely careful.

This is a challenge within a very competitive world. We need to be better than merely idealistic. So, what we can – hopefully – gain from our stance, among other things:

  • developing a company culture of ethics, attracting good people with strong ethical motivation who recognize each other in this
  • showing the world that we mean it, thus building a position of trust and cooperation with other trustworthy organizations
  • being sustainable in the long term, for when legislation demands the strictest ethical rules in this respect and compliance with them from the start
  • being able to ask people to cooperate and co-create on this most ethical basis.

We look forward to other, domain-related companies doing the same.

Together, we are stronger against the data-selling competition. Competition between ourselves (the ‘good ones’, if I may) will only urge each one to excel.

That’s a very good thing.

