Tried to embed a graphic, but it didn't work; the link below gives a nice shorthand for a bit more detail of what I'm talking about.
The IP issue is easy to dismiss if you aren't an artist. Saying "well, the company just stole a lot of stuff so we just have to live with it" is a pretty irresponsible approach--and remember, the conversation about hypothetical future harms is being led by the same people who committed that theft. Saying "these problems existed before, and regulation isn't good enough to reduce harms anyway" is similarly irresponsible in my opinion--especially since the bias is hidden in the dataset, which isn't visible to users or regulators. Putting companies that haven't been attentive to current harms in charge of mitigating future harms doesn't seem like good planning, and yet who is at the table, nationally and internationally, to mitigate imagined future harms? The CEOs of OpenAI, Anthropic, etc. So. That's some of why some people are pretty worked up about the focus on *hypothetical* futures (for capabilities that don't yet exist) vs. past and current harms.
https://www.universityaffairs.ca/opinion/in-my-opinion/chatgpt-we-need-to-talk-about-llms/