https://www.noahpinion.blog/p/why-trying-to-shape-ai-innovation/comment/17173499
Noah Smith discusses the effects of AI on workers and specifically “redirecting” AI to favor workers. He thinks that is a bad idea. I would say it is not so much a “bad” idea as a “null” idea; no one knows how to do it. It seems to be a way of sounding friendly to workers without seeming anti-AI. But what could it mean?
There are mathematical representations of the relationship between output and the amounts of labor and capital employed, and these imply the percentage by which output would increase if labor, for example, rose by 1%. This “marginal product” can be interpreted as the wage, and therefore determines the share of output going to labor. One might compare two such formulas representing the relationship with and without AI. Depending on the parameters of the formulas, AI could raise or lower the wage and the share of output going to labor, and, depending on the level of output, the value of labor’s share. [I’ll spare you and myself the effort to dust off our calculus to show this. :)]
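For the curious, the arithmetic can be sketched in a few lines. This is my own toy illustration, not anything from Smith’s post: it assumes a standard Cobb-Douglas production function Y = A·K^α·L^(1−α), where the wage is labor’s marginal product and labor’s share of output is (1−α).

```python
def production(A, K, L, alpha):
    """Output under a Cobb-Douglas technology: Y = A * K^alpha * L^(1-alpha)."""
    return A * K**alpha * L**(1 - alpha)

def wage(A, K, L, alpha):
    """Marginal product of labor, dY/dL, interpreted as the competitive wage."""
    return (1 - alpha) * A * K**alpha * L**(-alpha)

def labor_share_value(A, K, L, alpha):
    """Total value going to labor: wage times employment (= (1-alpha) * Y)."""
    return wage(A, K, L, alpha) * L

K, L = 100.0, 100.0  # hypothetical fixed amounts of capital and labor

# "Without AI": baseline parameters.
base = labor_share_value(A=1.0, K=K, L=L, alpha=0.3)        # 70.0

# "With AI", version 1: higher productivity A, same alpha.
# Labor's *share* (1-alpha) is unchanged, but its *value* rises with output.
ai_friendly = labor_share_value(A=1.5, K=K, L=L, alpha=0.3)  # 105.0

# "With AI", version 2: higher productivity but more capital-intensive
# (higher alpha). Labor's share falls, and here its value falls too.
ai_hostile = labor_share_value(A=1.5, K=K, L=L, alpha=0.6)   # 60.0

print(base, ai_friendly, ai_hostile)
```

The two “with AI” cases show the point in the paragraph above: the same productivity gain can raise or lower the total value flowing to labor, depending entirely on the parameters of the new technology.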
In this way of thinking, to worry that AI will harm workers is to worry that the parameters will change in such a way that the AI formula yields a lower total value of the labor share. “Redirecting” would mean making sure the parameters of the AI version of the formula yield a larger value of the labor share of output.
But even assuming that technologies with such parameters exist (and how could anyone know?), what incentives could be created so that investors would choose those “labor-friendly”[i] parameters instead of others? Such considerations inform Smith’s reservations about redirection.
Instead, Smith recommends changes in the relations between labor and the management of firms (he uses the example of union representation on company boards of directors) such that firms would have some incentive to avoid merely labor-replacing technologies and labor would have some incentive to embrace new technologies that raise the firm’s income. While I agree that is desirable, I doubt anyone knows much more about how to bring about those improved relations than they do about incentivizing the choice of more labor-friendly technologies directly.
Smith’s other idea is a more robust welfare state that goes beyond social insurance (health and unemployment insurance, retirement benefits, and means-tested transfers) to more explicit forms of redistribution, like a more generous Earned Income Tax Credit or Child Tax Credit, or (I’m not a fan) a Universal Basic Income.
I would add two other considerations.
A) The effects of AI on workers are likely to be better in the context of other policies that promote rapid growth, and
B) The existence of a robust welfare state ought to reduce political resistance to employing AI.
[i] And this glosses over the fact that “labor” is highly diverse; the CEO’s efforts are “labor” as much as the janitor’s.