Selling Data is Selling Soul

August 15, 2019 · Artificial Intelligence, General Insights

… if the data are personal and in a big data context. It’s like a Faustian deal, but Faust only sold his own soul. Where is Mephistopheles?

Big data + A.I. = big knowledge

Artificial Intelligence is already so powerful that it can turn vast amounts of data (passive, unrelated) into knowledge (active, related). ‘Knowledge is power’ becomes ever more frightening in this setting. It’s about big knowledge.

And it has barely begun.

This is knowledge about how humans can be influenced… and manipulated.

Guess what a party intends to do when paying for data?

An advertising company wants to make people buy more stuff, irrespective of real needs. Whether this extreme way of putting it is fully realistic isn’t relevant. It’s realistic enough.

In politics, in religion, in amassing plain personal power… you name it. The possibilities for manipulation are real and underway, if not already realized in some form.

And it has barely begun.

Gone is privacy

As one dire side-effect.

The knowledge that can be extracted from personal big data nowadays is such that people can be emotionally targeted almost at the individual level, without any need for names or identification numbers. That is: you are being modeled quite accurately, and then this model is targeted. Not you? Well… the model is almost like your second identity.

As far as the advertising company is concerned, it is you.

As far as I know, there are not yet any regulations in this respect.

With Lisa, this will be more important than ever

Lisa gains a deeper knowledge of people in general and of individuals in particular.

Let’s not joke about it: this can be used for good and for bad. It can be used by Lisa herself and – eventually – by anyone who gets hold of the data, which is to say the knowledge.

This makes ethics immensely important!

AURELIS will never sell personal data

A pledge:

As part of AURELIS / Lisa ethics, we will never sell any user data, nor work with a company that does not explicitly state the same.

It may be clear why this is a very strict rule.

We forsake the income from selling data, even though we “could do much good with the money.” Faustian mythology makes us extremely careful.

This is a challenge in a very competitive world. We need to be better than merely idealistic. So, what we can – hopefully – gain from our stance, among other things:

  • developing a company culture of ethics, attracting good people with strong ethical motivation who recognize each other in this
  • showing the world that we mean it, thus building a position of trust and cooperation with other trustworthy organizations
  • being sustainable in the long term: when legislation demands the strictest ethical rules in this respect, we will have been compliant from the start
  • being able to ask people to cooperate and co-create on this most ethical basis.

We look forward to other companies in our domain doing the same.

Together, we are stronger against the data-selling competition. Competition among ourselves (the ‘good ones,’ if I may) will only spur each of us to excel.

That’s a very good thing.
