Conversation
action_flat_dim = np.prod(env.action_space.shape)
if self._max_obs_dim is not None:
    obs_flat_dim = self._max_obs_dim
return gym.spaces.Box(low=-np.inf,
Why not just call akro.Box here?
Args:
    env (gym.Env): An env that will be wrapped.
    max_obs_dim (int): Maximum observation dimension in the environments
This is a needed feature to actually use ML45, but I think it probably belongs in a separate wrapper. We can address that in a later PR after submitting this.
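To sketch what such a separate wrapper could look like: the core of it is zero-padding each flat observation up to `max_obs_dim`, so that environments with different observation sizes (as in ML45) share one observation space. This is a minimal, hypothetical helper, not garage's actual implementation; `pad_observation` and its signature are illustrative only.

```python
import numpy as np

def pad_observation(obs, max_obs_dim):
    """Zero-pad a flat observation up to max_obs_dim.

    Hypothetical helper sketching what a dedicated padding wrapper
    could do for mixed-dimension benchmarks such as ML45.
    """
    obs = np.asarray(obs, dtype=np.float64).ravel()
    if obs.size > max_obs_dim:
        raise ValueError('observation larger than max_obs_dim')
    padded = np.zeros(max_obs_dim, dtype=obs.dtype)
    padded[:obs.size] = obs
    return padded

# Example: a 3-dim observation padded to 6 dims
print(pad_observation([1.0, 2.0, 3.0], 6))
```

A wrapper built on this would also need to report the padded shape in its `observation_space`, which is what the `obs_flat_dim = self._max_obs_dim` branch above is doing.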
Codecov Report
@@            Coverage Diff            @@
##           master    #1175     +/-   ##
=========================================
+ Coverage   87.95%      88%   +0.04%
  Files         183      185       +2
  Lines        8711     8743      +32
  Branches     1107     1110       +3
=========================================
+ Hits         7662     7694      +32
  Misses        853      853
  Partials      196      196
Continue to review full report at Codecov.
from garage.envs.half_cheetah_vel_env import HalfCheetahVelEnv
from garage.envs.normalized_env import normalize
from garage.envs.point_env import PointEnv
from garage.envs.rl2_env import RL2Env
Can this be part of the garage.tf.algos.rl2 module instead?
Is there any way that RL2 can automatically wrap the environment with this wrapper?
Yes, I will put this inside the RL2 class and make RL2 automatically wrap the environment with this wrapper.
Actually, the env never gets passed to RL2 -- it only lives in the runner, so RL2 can't wrap it.
Ah, this will have to wait for the refactor where the algorithm owns the sampler.
Can you please file an issue about this?
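Once the algorithm owns the sampler, the pattern being discussed is straightforward: the algorithm applies the wrapper itself when it constructs envs, instead of the launcher doing it. This is a minimal stand-in sketch of that idea; `AutoWrap`, `make_wrapped`, and `DummyEnv` are illustrative names, not garage's RL2Env API.

```python
class AutoWrap:
    """Stand-in for an RL2-style env wrapper, applied automatically.

    A real RL2 wrapper would concatenate the previous action and
    reward into the observation; here we just tag the observation
    to show where that hook lives.
    """
    def __init__(self, env):
        self.env = env

    def reset(self):
        obs = self.env.reset()
        return ('wrapped', obs)

def make_wrapped(env_factory):
    # If the algorithm owns the sampler, it can make this call
    # itself rather than relying on the launcher script.
    return AutoWrap(env_factory())

class DummyEnv:
    def reset(self):
        return [0.0]

env = make_wrapped(DummyEnv)
print(env.reset())  # ('wrapped', [0.0])
```

The design point is that wrapping moves out of user-facing launcher code and into the algorithm, so users can't forget to apply it.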