| OS | Working |
|---|---|
| Ubuntu 18.04 | ✔️ |
| Windows 10 | ✔️ |
Python version: 3.6.3 ✔️
This repository provides the implementation of a training environment for RL agents for the UI adaptation problem.
It is recommended to create and/or activate your own Conda environment:

```shell
conda create -n UIrlhf python==3.6.3
conda activate UIrlhf
```

Then install:
```shell
cd RL-based-UIAdaptation
pip install -e .
```

Then you can use it in your files by importing it:
```python
import gym
import ui_adapt

env = gym.make("UIAdaptation-v0")
```

To try the environment, we are going to import it first, then render the env after each step.
```python
import gym
import ui_adapt

env = gym.make("UIAdaptation-v0")
env.render()               # You should see the UI-Context created
env.step(0, verbose=True)  # No-operate
env.render()               # You should see the same as before (since the action was no-operate)
env.step(1, verbose=True)  # Changing to list
env.render()               # You should see that the layout changed to 'List'
```

The 3 main aspects of the RL agent are the state, the actions, and the reward. All of them are configured in the `config.json` file.
- **State**: This JSON file MUST have `USER`, `PLATFORM`, `ENVIRONMENT` and `UIDESIGN` keys in order to represent the state.
- **Actions**: It also MUST have an `ACTIONS` key in order to define what adaptations you have implemented. Each action must have a `name`, a `target` and a `value`, representing what is going to be changed (target) and with which property (value).
- **Reward**: the `compute_reward()` function in the `UIAdaptationEnv` class should be modified.
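Taking the usage example above as a reference, a minimal `config.json` could look like the following. This is an illustrative sketch only: the concrete values and the two action entries (a no-operate action and a layout change, mirroring actions `0` and `1` from the example) are assumptions, not the repository's actual schema.

```json
{
  "USER": {},
  "PLATFORM": {},
  "ENVIRONMENT": {},
  "UIDESIGN": {
    "layout": "Grid"
  },
  "ACTIONS": [
    { "name": "no_operate", "target": "none", "value": "none" },
    { "name": "change_to_list", "target": "layout", "value": "List" }
  ]
}
```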
If you have an adaptive app, you can connect it to this environment through API calls. The API configuration should be defined in the `API_CONNECTION` key. More information about how to connect the API is below (still a work in progress).
- Format:
  - TODO...
  - TODO...
TODO - formatting of the API... (The config file allows some personalization of the API calls, but not full customization.)
First, install the dependencies:

```shell
pip install tensorflow==1.10
pip install keras==2.1.6
pip install keras-rl2
```

Then,
1. Go to the `src` path: `cd src`
2. Test with the algorithms:
   1. `python QLearning.py` - This will stop automatically
   2. `python KerasRL_DQN.py` - This will stop when you hit "Ctrl+C"
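For reference, the tabular Q-learning update that a script like `QLearning.py` typically implements can be sketched as follows. This is a self-contained toy example on a hypothetical 1-D chain environment, not the repository's actual code:

```python
import random

# Tabular Q-learning on a toy chain: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3

def step(state, action):
    """Hypothetical toy dynamics: move left or right along the chain."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q_table[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update rule: Q(s,a) += alpha * (target - Q(s,a)).
        target = reward + GAMMA * max(q_table[next_state]) * (not done)
        q_table[state][action] += ALPHA * (target - q_table[state][action])
        state = next_state

# The greedy policy learned here should prefer moving right from every state.
print(q_table)
```

The epsilon-greedy loop and update rule are the core of any tabular Q-learning script; only the `step()` dynamics would differ in the UI adaptation environment.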
You can install libraries such as Keras-RL2, Stable-Baselines, RL-Coach, etc. Then, create your own Python program that uses an algorithm from one of these libraries and imports the UIAdaptation environment as shown in the Installation section.
There is still some work in progress. This is a prototype.
The reward function currently only works with preferences + emotions. Possible future work:
- Get the usability from UI Design.
- Get Emotions from sources such as facial expressions, etc.
- Use Reinforcement Learning with Human Feedback. Check `rl-teacher-ui-adapt` to see how to connect both and how to use it.
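As an illustration of the "preferences + emotions" idea, a `compute_reward()` along these lines could combine a preference-match score with an emotion bonus. This is a hypothetical sketch, not the repository's implementation; the dict keys and emotion labels are made-up examples:

```python
def compute_reward(ui_design, user):
    """Hypothetical reward: fraction of matched preferences plus an emotion bonus.

    `ui_design` and `user` are plain dicts here; the real environment keeps
    this state in its config-driven objects.
    """
    preferences = user.get("preferences", {})
    if preferences:
        matched = sum(1 for key, wanted in preferences.items()
                      if ui_design.get(key) == wanted)
        preference_score = matched / len(preferences)
    else:
        preference_score = 0.0
    # Map a coarse emotion label to a bonus/penalty.
    emotion_bonus = {"positive": 0.5, "neutral": 0.0, "negative": -0.5}
    emotion_score = emotion_bonus.get(user.get("emotion", "neutral"), 0.0)
    return preference_score + emotion_score

# Example: the user prefers a 'List' layout and currently shows a positive emotion.
reward = compute_reward({"layout": "List"},
                        {"preferences": {"layout": "List"}, "emotion": "positive"})
print(reward)  # 1.0 (all preferences matched) + 0.5 (positive emotion) = 1.5
```

Swapping in usability metrics or externally detected emotions would only change how `preference_score` and `emotion_score` are computed, not the overall shape of the function.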