The language thing made me think of this:
I think we need to assume that any definition of AI has to inherently include a definition of intelligence. Whether something occurs naturally or comes about by man-made means isn’t very relevant. Something is either intelligent (read: sentient) or it isn’t.
That’s where all the questions come from. The reason the “artificial” part comes into play is that it affects the discussion and reaction that follows. In our society we gain ownership of the things we create. But to me, something that is sentient cannot be owned. Post-Renaissance philosophy has brought about the idea of God-granted (or natural, if you prefer) unalienable/inalienable rights (life, liberty, property). So if a sentient alien ever shows up, we (I hope) will assume that the extra-terrestrial has those same rights.
But if we create a machine/device/thing that is sentient, can we endow it with those rights? Asimov seems to imply that we could, but we won’t (the Three Laws restrict liberty and life). On the other hand, in “I, Robot” there’s discussion of the positronic brain being a complete mystery to the engineers who produce it…
So that’s my take. If it’s sentient (and sentience is something I really can’t define), it doesn’t matter whether it is natural or artificial. What matters is the discussion that comes after that.
There’s a book I read last year called “His Robot Girlfriend”. It’s a story about an AI, and it ends pretty happily. Give it a shot: