## [Miscellaneous notes, implementation roadmap, etc.](./doc/notes.md)

# Installation of the Intuitiv project (including Vonda for Rolli compilation)

    # install Apache Thrift (version 0.12.0 or higher) on your machine
    # for Linux:
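    # on Debian/Ubuntu the packaged version may be sufficient, e.g.:
    #   sudo apt install thrift-compiler libthrift-dev
    # check that `thrift --version` reports 0.12.0 or higher,
    # otherwise build Thrift from source as described in its documentation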
    

    git clone https://github.com/bkiefer/vonda.git
    cd vonda
    git checkout developer
    ./install_locallibs.sh
    mvn clean install
    # make sure bin/vondac is in PATH
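    # e.g., while still inside the vonda checkout (adjust for your shell/setup):
    #   export PATH="$PWD/bin:$PATH"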
    
    cd ..
    git clone git@mlt-gitlab.sb.dfki.de:willms/Intuitiv.git
    cd Intuitiv
    git checkout developer
    git submodule init
    git submodule update
    ./install_submodules.sh
    # Test
    ./compile
    ./run.sh


# Conversion of .osm files into .owl or .nt files

First, clone and compile (`mvn install`) the osm2owl project:

    git clone git@mlt-gitlab.sb.dfki.de:chbi02/osm2owl.git

Use `convert.sh` to do the conversion; for the DFKI test data the call looks like this:

    ./convert.sh ~/data/ontology/osm/osm.owl ~/code/src/test/resources/DFKI-3.osm osmdatadfki
 
**Attention: your paths to `osm.owl` and the `*.osm` file may vary.**

The resulting `.owl` file is written to the directory where `osm.owl` lives;
it is a good idea to supply the (optional) output name (here: `osmdatadfki`) so
the result ends up in its own directory.
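
For reference, the argument order inferred from the example above is (consult
`convert.sh` itself if your version differs):

    ./convert.sh <path to osm.owl> <input .osm file> [output name]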

# Instructions for Rolland-Simulation

### Setup

First, set up the Ubuntu repositories for ROS according to:

<http://wiki.ros.org/melodic/Installation/Ubuntu>

(And possibly also for [gazebo](http://www.gazebosim.org/tutorials?tut=install_ubuntu), though this should not be necessary.)

    sudo apt install ros-melodic-desktop-full
    sudo apt install ros-melodic-ddynamic-reconfigure
    sudo apt install ros-melodic-depthimage-to-laserscan
    sudo apt install gazebo

    git clone git@github.com:bkiefer/audio_common.git

    git clone git@git-int.hb.dfki.de:intuitiv/intuitiv_simulation.git

    git clone https://github.com/IntelRealSense/realsense-ros.git
    git clone https://github.com/pal-robotics/ddynamic_reconfigure.git

    git clone -b melodic-dev https://github.com/srl-freiburg/pedsim_ros.git
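
These ROS packages presumably belong in the `src` folder of the catkin
workspace used below (a sketch, assuming the workspace is called `catkin_ws`):

    mkdir -p catkin_ws/src
    cd catkin_ws/src
    # run the git clone commands from above in here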


Visit this site and install the developer and debug packages;
`dkms` and `de.dfki.intuitiv.utils` are **not** needed:

<https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md>

Following that page, install the packages:

    sudo apt-get install librealsense2-dev
    sudo apt-get install librealsense2-dbg


### Build the Intuitiv simulation etc.
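
In a fresh shell, the global ROS environment usually has to be sourced first
(standard path for ROS Melodic; adjust if your installation differs):

    source /opt/ros/melodic/setup.bash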

    cd catkin_ws
    catkin_make
    source devel/setup.bash

Start the simulator (gazebo) with

`roslaunch intuitiv_simulation saarschleife_og.launch`

To start localisation:

`roslaunch intuitiv_simulation localisation.launch`

Now set the initial pose in RViz so that the localisation guess is acceptable.
To do so, you have to select `map` as the Fixed Frame in RViz. To steer the robot,
you can use, e.g.:

`rosrun rqt_robot_steering rqt_robot_steering`
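
Alternatively, velocity commands can be published directly from the command line
(the topic name `/cmd_vel` is an assumption and may differ for the Rolland model):

    rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.2, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'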

## Testing dialogues

First compile all rules, then start the test interface using `./run.sh`.
Source your ROS workspace and start all components using `roslaunch speech_processing speechprocessing.launch`.
Enter the following statements into the test interface:

    setDestination(-17.62, -2.45)
    setTask(-17.22, 0.91)
    setLocation(-17.619, -2.45)

    
## Setting up the ROS Interface
To set up the ROS components of this project:
 1. Install `catkin_virtualenv` in your ROS workspace (see http://wiki.ros.org/catkin_virtualenv).
 2. Download and install the DeepSpeech speech recognizer package for Python:
 
    git clone https://github.com/fossasia/speech_recognition.git
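    # the steps described below, combined (may require sudo or a virtualenv):
    cd speech_recognition
    git checkout deepspeech
    python3 setup.py install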
    
    Make sure to check out the `deepspeech` branch.
    To install the package, navigate to its root directory and run `python3 setup.py install`.
 
 3. Move the `ros/speech_processing` folder into your ROS workspace.  
The resulting structure should look similar to:


     catkin_ws/
        - devel/...
        - build/...
        + src/
            - speech_processing/
                - launch/
                - scripts/
                CMakeLists.txt
                package.xml
                requirements.txt
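
After moving the package into place, rebuild and source the workspace again
(standard catkin workflow; paths may differ on your machine):

    cd catkin_ws
    catkin_make
    source devel/setup.bash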


## [Dialogue acts provided by the NLU](./doc/dialogueActs.md)

## Docker Images
 To start the Vonda Docker container, run
 
     docker run --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" test_intuitiv:latest

To find the available sound outputs, run `cat /proc/asound/modules`; then test the sound with `speaker-test -c2 -twav -l7 -D plughw:1,0`.


     docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb tts_intuitiv:latest

If you encounter any errors related to X11, try running `xhost +"local:docker@"` on your host system.

## Use the GUI to test
    1. Create a task:
       setTask(Valerie Poser, POI_1, POI_2, Blutabnahme)
       setTask(Valerie Poser, POI_J1, POI_J2, Blutabnahme)
       setLocation(2.53, -0.175, 0)
       // the robot should now drive to POI_1
    2. Set the location:
       setLocation(12.0, 33.0, 0)
    3. Signal that the POI was reached:
       reachedPOI()
       // this triggers the greeting dialogue