Selling Data is Selling Soul

August 15, 2019 – Artificial Intelligence, General Insights

… if the data are personal and in a big data context. It’s like a Faustian deal, but Faust only sold his own soul. Where is Mephistopheles?

Big data + A.I. = big knowledge

Artificial Intelligence is already powerful enough to turn vast amounts of data (passive, unrelated) into knowledge (active, related). ‘Knowledge is power’ becomes ever more frightening in this setting. It’s about big knowledge.

And it has barely begun.

This is knowledge about how humans can be influenced… and manipulated.

Guess what a party intends to do when it pays for data?

An advertising company wants to make people buy more stuff, irrespective of real needs. Whether this extreme way of putting it is fully realistic isn’t relevant. It’s realistic enough.

In politics, in religion, in amassing just plain personal power… you name it. The possibilities for manipulation are real and underway, if not already realized in some form.

And it has barely begun.

Gone is privacy

As one dire side-effect.

The knowledge that can be extracted from personal big data nowadays is such that people can be emotionally targeted almost at the individual level, without needing access to names or identification numbers of any sort. That is: you are being modeled quite accurately. Then this model is targeted. Not you? Well… the model is almost like your second identity.

As far as the advertising company is concerned, it is you.
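To make this ‘second identity’ concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the behavioral feature names are invented, the data are random stand-ins, and plain k-means clustering is just one simple way such profiling could be done, not a claim about any advertiser’s actual method. The point it illustrates is the one above: no name or ID appears anywhere, yet each profile can be targeted as if it were a person.

```python
# Hypothetical sketch: building targetable "second identities" from
# anonymous behavioral data, with no names or IDs involved.
import numpy as np

rng = np.random.default_rng(0)

# Invented anonymous features per visitor:
# [late-night activity, impulse-buy rate, ad-click rate]
visitors = rng.random((1000, 3))

def kmeans(data, k=4, iters=20):
    """Plain k-means: groups visitors into k behavioral profiles."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each visitor to the nearest profile (center).
        labels = np.argmin(
            np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1
        )
        # Move each center to the mean of its assigned visitors.
        centers = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return centers, labels

profiles, labels = kmeans(visitors)

# A new, equally anonymous visitor is matched to the profile that
# behaves most like them -- and that profile is what gets targeted.
new_visitor = np.array([0.9, 0.8, 0.7])
target = int(np.argmin(np.linalg.norm(profiles - new_visitor, axis=1)))
print(f"Visitor matched to behavioral profile #{target}")
```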

As far as I know, there are not yet any regulations in this respect.

With Lisa, this will be more important than ever

Lisa gains deep knowledge of people in general and of individuals in particular.

Let’s not joke about this: it can be used for good and for bad. It can be used by Lisa herself and – eventually – by anyone who gets hold of the data, and thereby the knowledge.

This makes ethics immensely important!

AURELIS will never sell personal data

A pledge:

As part of AURELIS / Lisa ethics, we will never sell any user data, nor work with a company that does not explicitly state the same.

It may be clear why this is a very strict rule.

We forgo the income from selling data, even though we “could do much good with the money.” The Faustian myth makes us extremely careful.

This is a challenge in a very competitive world. We need to be better than merely idealistic. So, among other things, what we hope to gain from our stance:

  • developing a company culture of ethics, attracting good people with strong ethical motivation who recognize each other in this
  • showing the world that we mean it, thus building a position of trust and cooperation with other trustworthy organizations
  • being sustainable in the long term, when legislation comes to demand the strictest ethical rules in this respect and compliance from the start becomes a necessity
  • being able to ask people to cooperate and co-create on this most ethical basis.

We look forward to other companies in our domain doing the same.

Together, we are stronger against the data-selling competition. Competition among ourselves (the ‘good ones’, if I may) will only spur each of us to excel.

That’s a very good thing.
