#StackBounty: #reinforcement-learning #embeddings Auto-Encoder to condense (pre-process) large one-hot input vectors?

Bounty: 50

In my 3D game there are 300 categories to which a creature can belong.
I would like to teach my RL agent to make decisions based on the 10 monsters closest to it.

So far, my neural network's input vector is a concatenation of ten 300-dimensional one-hot encoded vectors. It would be awesome if I had, say, ten 40-dimensional vectors instead.
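For concreteness, the current uncompressed input could be built like this (a minimal numpy sketch; the category IDs and the `one_hot` helper are made up for illustration):

```python
import numpy as np

NUM_CATEGORIES = 300   # monster categories in the game
NUM_MONSTERS = 10      # closest monsters fed to the agent

def one_hot(category_id, num_categories=NUM_CATEGORIES):
    """Return a one-hot row vector for a single category."""
    v = np.zeros(num_categories, dtype=np.float32)
    v[category_id] = 1.0
    return v

# Hypothetical category IDs of the ten closest monsters.
nearby = [3, 17, 42, 42, 99, 120, 3, 255, 7, 61]

# Concatenate ten 300-d one-hots into one 3000-d input vector.
state = np.concatenate([one_hot(c) for c in nearby])
print(state.shape)  # (3000,)
```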

So, is it possible to pass a 300-dimensional one-hot encoded vector through an auto-encoder and grab its compressed version (embedding) from the bottleneck in the middle of the auto-encoder?

This would allow me to concatenate several such compressed embeddings into a “total” input (ten concatenated 40D vectors) without bloating it.
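What that would look like at inference time, sketched in numpy with placeholder weights (in practice `W_enc` and `b_enc` would come from a trained auto-encoder; note that for a one-hot input the encoder's matrix product is just a row lookup):

```python
import numpy as np

NUM_CATEGORIES, BOTTLENECK = 300, 40
rng = np.random.default_rng(0)

# Untrained placeholder weights standing in for a trained encoder.
W_enc = rng.normal(scale=0.1, size=(NUM_CATEGORIES, BOTTLENECK)).astype(np.float32)
b_enc = np.zeros(BOTTLENECK, dtype=np.float32)

def one_hot(category_id, num_categories=NUM_CATEGORIES):
    v = np.zeros(num_categories, dtype=np.float32)
    v[category_id] = 1.0
    return v

def encode(one_hot_vec):
    """Bottleneck activation: for a one-hot input, x @ W_enc
    selects a single row of W_enc (an embedding lookup)."""
    return np.tanh(one_hot_vec @ W_enc + b_enc)

# Hypothetical category IDs of the ten closest monsters.
nearby = [3, 17, 42, 42, 99, 120, 3, 255, 7, 61]

# Ten 40-d embeddings concatenated: 400-d instead of 3000-d.
compressed = np.concatenate([encode(one_hot(c)) for c in nearby])
print(compressed.shape)  # (400,)
```

Two monsters of the same category (here, category 3 at positions 0 and 6) map to identical 40-d slices, so no information about category identity is lost by the lookup itself.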

In my game, each one-hot represents a distinct category. My categories are not correlated in any way.

1) Is this valid, or will it introduce unwanted assumptions, the way label encoding does?

2) How do I train this auto-encoder? Can I simply pass my one-hots through it and fine-tune until I get the exact same one-hot back on the other end?
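The training I have in mind would be something like the following pure-numpy sketch: treat reconstruction as a 300-way classification and minimize softmax cross-entropy over the full set of one-hots. (Dimensions, learning rate, and the linear encoder/decoder are assumptions, not a canonical recipe.)

```python
import numpy as np

NUM_CATEGORIES, BOTTLENECK = 300, 40
rng = np.random.default_rng(0)

# Linear encoder and decoder weights (biases omitted for brevity).
W1 = rng.normal(scale=0.1, size=(NUM_CATEGORIES, BOTTLENECK))
W2 = rng.normal(scale=0.1, size=(BOTTLENECK, NUM_CATEGORIES))

# Training set: every one-hot exactly once (the identity matrix).
X = np.eye(NUM_CATEGORIES)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 5.0
losses = []
for epoch in range(200):
    H = X @ W1                 # bottleneck activations (the embeddings)
    P = softmax(H @ W2)        # reconstructed category distribution
    # Cross-entropy of each one-hot against its reconstruction.
    diag = np.arange(NUM_CATEGORIES)
    loss = -np.log(P[diag, diag] + 1e-12).mean()
    losses.append(loss)
    # Backprop of softmax cross-entropy: d(logits) = P - X.
    dlogits = (P - X) / NUM_CATEGORIES
    dW2 = H.T @ dlogits
    dW1 = X.T @ (dlogits @ W2.T)
    W1 -= lr * dW1
    W2 -= lr * dW2

print(losses[0], losses[-1])  # reconstruction loss should drop
```

After training, the rows of `W1` are the 40-d codes. Note a 40-d bottleneck cannot reconstruct 300 one-hots perfectly, so I would be checking that the decoder's argmax recovers the right category rather than expecting an exact one-hot output.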

3) Is there a better way to describe several monsters?

