Technological development is occurring at a rate that greatly surpasses our capacity to collectively conceive, consider, and decide upon its ethical implications. Ethical decisions fall into the hands of the individuals who own the means of such development. These individuals are not necessarily qualified for those decisions; but even if they were, since every technological advancement affects the general population, such decisions should still be made democratically. For each success achieved via, say, deep neural networks, there is a loss: the inference of the molecular structure of every protein known to man came alongside the use of facial recognition by the State; with the funding of hope-bearing research comes the wasteful allocation of talented minds and money to sterile and futile endeavors; and even with the most fascinating technologies, for each positive use one easily fancies a hundred perilous ones.

Insofar as this is true, no scientist concerning himself with the development of artificial intelligence can be a blind market actor. I do not see it as a requirement that his work be ethically noble, since we require that of no other profession: the shoemaker and the farmer are not producers of ethical good. It suffices to require that we be productive to society; that is, that our work enrich society as a whole, be it in small or large degree, just as we are all enriched by society. Our knowledge should strive to beget or improve only those technologies that are not designed for oppression, that are not to be enjoyed only by a privileged few, and that may lighten the burden of existence for the whole of humankind, in however tiny a way.

The extent to which computer science, and particularly artificial intelligence, can be used for this purpose (the improvement of human life) need not be set out in detail. I have found that scientific work, being a small and humble component of the great machinery of science, which sets out to understand and to ameliorate human suffering and to cast away the spell of ignorance, is more than I could ask for. But this need not be the case for everybody. Nothing said here is a lecture on what to do; rather, it is a discourse on what to care about. Humane and honest reflection upon the latter should suffice to disclose a path for anyone.

The cynic's reply to all of this I disregard. As long as there are objectively good and bad things (a tenet I here accept axiomatically, this not being the place to make its case), as long as there are things truly deserving of the epithets good and bad, I say, there immediately follows a necessary concern for proliferating those worthy of the former and diminishing those that bear the latter. The widespread idea that such an elementary concern should vanish when it comes to our role in society, in the name of a so-called natural tendency to maximize individual profit at all costs, is not even worth the attention of anyone sane enough to conceive of the perils (and utter wastefulness) of blind technological advancement as it occurs today.

The meaning of any idea, the pragmatist claims, is the whole of its effects. I venture: so too the meaning of any discipline.