Commit f4210686 authored by Christian Willms

updated config

parent 9e15ac0a
@@ -14,124 +14,32 @@
# make sure bin/vondac is in PATH
cd ..
git clone git@mlt-gitlab.sb.dfki.de:willms/Intuitiv.git
cd Intuitiv
git checkout developer
git clone git@mlt-gitlab.sb.dfki.de:willms/vonda_base.git
cd vonda_base
git submodule init
git submodule update
./install_submodules.sh
# Test
sh ./compile
./run.sh
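The comment at the top of this section assumes that `bin/vondac` is on your `PATH`; a minimal way to arrange that (the install location used here is only an assumption, adjust it to your checkout) is:

```bash
# add the VOnDA binaries to PATH; ~/vonda is an assumed install location
export PATH="$HOME/vonda/bin:$PATH"
```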
# Conversion of .osm files into .owl or .nt files
First, clone and compile (`mvn install`) the osm2owl project:
git clone git@mlt-gitlab.sb.dfki.de:chbi02/osm2owl.git
Use `convert.sh` to do the conversion; for the DFKI test data, it looks like this:
./convert.sh ~/data/ontology/osm/osm.owl ~/code/src/test/resources/DFKI-3.osm osmdatadfki
**Attention, your path to osm.owl or \*.osm may vary.**
It puts the resulting `.owl` file into the directory where `osm.owl` lives;
it is a good idea to supply the (optional) output name (here: `osmdatadfki`) so
that the result ends up in its own directory.
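Putting the conversion steps together as one shell session (the paths are the example paths from above and will likely differ on your machine; it is assumed that `convert.sh` lives in the repository root):

```bash
git clone git@mlt-gitlab.sb.dfki.de:chbi02/osm2owl.git
cd osm2owl
mvn install
# example paths; adjust to where your osm.owl and .osm export actually live
./convert.sh ~/data/ontology/osm/osm.owl ~/code/src/test/resources/DFKI-3.osm osmdatadfki
```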
# Instructions for Rolland-Simulation
### Setup
First, set up the Ubuntu repositories for ROS according to:
<http://wiki.ros.org/melodic/Installation/Ubuntu>
(and possibly also for [gazebo](http://www.gazebosim.org/tutorials?tut=install_ubuntu), though this should not be necessary)
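A minimal sketch of what the linked ROS page describes; check the page itself for the current repository key, as the key below is taken from the standard melodic instructions and may change:

```bash
# add the ROS package repository and its signing key, then refresh the package index
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
sudo apt update
```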
sudo apt install ros-melodic-desktop-full
sudo apt install ros-melodic-ddynamic-reconfigure
sudo apt install ros-melodic-depthimage-to-laserscan
sudo apt install gazebo
git clone git@github.com:bkiefer/audio_common.git
git clone git@git-int.hb.dfki.de:intuitiv/intuitiv_simulation.git
git clone https://github.com/IntelRealSense/realsense-ros.git
git clone https://github.com/pal-robotics/ddynamic_reconfigure.git
git clone -b melodic-dev https://github.com/srl-freiburg/pedsim_ros.git
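The usual place for these checkouts is the `src` folder of a catkin workspace; the workspace path `~/catkin_ws` is an assumption, but it matches the build step below:

```bash
# clone the ROS packages listed above into the workspace's src directory
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
# run the git clone commands from the list above in this directory
```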
Visit this site and install the developer and debug packages
(`dkms` and `de.dfki.intuitiv.utils` are **not** needed):
<https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md>
Following that page, install these packages:
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
### Build the Intuitiv simulation etc.
cd catkin_ws
source devel/setup.bash
catkin_make
Start the simulator (gazebo) with
`roslaunch intuitiv_simulation saarschleife_og.launch`
To start localisation:
`roslaunch intuitiv_simulation localisation.launch`
Now set the initial pose in RViz so that the localisation guess is acceptable.
To do so, you have to select the Fixed Frame `map` in RViz. To steer the robot,
you can use, e.g.:
`rosrun rqt_robot_steering rqt_robot_steering`
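As a quick overview, a typical session (assuming each launch runs in its own terminal and the workspace is sourced in every terminal) looks like this:

```bash
# in every terminal: source the workspace first
source ~/catkin_ws/devel/setup.bash

roslaunch intuitiv_simulation saarschleife_og.launch   # simulator (gazebo)
roslaunch intuitiv_simulation localisation.launch      # localisation
rosrun rqt_robot_steering rqt_robot_steering           # steer the robot
```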
## Testing dialogues
First compile all rules (`sh ./compile`), then start the test interface using `./run.sh`.
Source your ROS workspace and start all components using `roslaunch speechprocessing speechprocessing.launch`.
Enter the following statements into the test interface:
setDestination(-17.62, -2.45)
setTask(-17.22, 0.91)
setLocation(-17.619, -2.45)
Hallo
Now you should be greeted back by the system with an appropriate statement, such as
Hallo
Mahlzeit
Guten Abend
## Setup ROS Interface
To set up the ROS components of this project:
1. Install `catkin_virtualenv` in the project's ROS workspace: http://wiki.ros.org/catkin_virtualenv
2. Download and install the deepspeech speech-recognizer package for Python:
   git clone https://github.com/fossasia/speech_recognition.git
   Make sure to check out the `deepspeech` branch.
   To install the package, navigate to its root directory and run `python3 setup.py install` (see the sketch after the directory tree below).
3. Move the `ros/speech_processing` folder into your ROS workspace.
The resulting structure should look similar to:
catkin_ws/
- devel/..
- build/..
- src/
  - speech_processing/
    - launch/
    - scripts/
    - CMakeLists.txt
    - package.xml
    - requirements.txt
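The sketch referenced in step 2 above, as one shell session (branch name and install command as stated in that step):

```bash
# fossasia fork of speech_recognition, using the deepspeech branch
git clone https://github.com/fossasia/speech_recognition.git
cd speech_recognition
git checkout deepspeech
python3 setup.py install
```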
You can also use the command `testDias()` in the user interface to print out the realisations of all
`emitDA` statements in `test.rudi`.
## [Dialogue acts provided by the NLU](./doc/dialogueActs.md)
@@ -139,23 +47,11 @@ The resulting structure should look similar to:
## Docker Images
To start the Vonda Docker image, run:
docker run --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" test_intuitiv:latest
Find the sound outputs with `cat /proc/asound/modules`, then test the sound with `speaker-test -c2 -twav -l7 -D plughw:1,0`.
docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb tts_intuitiv:latest
docker-compose up --build
If you encounter any errors related to X11, try running `xhost +"local:docker@"` on your host system.
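Putting the X11 workaround and the container start together (same commands as above, run on the host, with the xhost call only needed if X11 errors occur):

```bash
# on the host: allow local Docker containers to use the X server, then start the image
xhost +"local:docker@"
docker run --net=host --env="DISPLAY" \
  --volume="$HOME/.Xauthority:/root/.Xauthority:rw" test_intuitiv:latest
```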
## Use the GUI to test
1. Create a task:
   setTask(Valerie Poser, POI_1, POI_2, Blutabnahme)
   setTask(Valerie Poser, POI_J1, POI_J2, Blutabnahme)
   setLocation(2.53, -0.175, 0)
   // the robot should now drive to POI_1
2. Set the location: setLocation(12.0, 33.0, 0)
3. Set POI reached: reachedPOI()
   // triggers the greeting dialogue
@@ -4,16 +4,12 @@ services:
  dialog_manager:
    build:
      context: .
    image: ${DOCKER_REGISTRY_ADDRESS}/dialog_manager:latest
    container_name: ${PROJECT_NAME}_dialogue_manager
    image: vonda_base:latest
    container_name: vonda_base
    environment:
      - ROS_MASTER_URI=${ROS_MASTER_URI}
      - ROS_IP=${ROS_IP}
      - DIA_IP=${DIA_IP}
      - DISPLAY=${DISPLAY}
    volumes:
      - ./config.yml:/config.yml
      - ./poi.nt:/data/ontology/poi.nt
      - /dev:/dev
      - $HOME/.Xauthority:/root/.Xauthority:rw
    privileged: true