
Misty Community Forum

My notes for the first 23 days of Misty

These are my notes/thoughts from working with the Misty beta for the first 23 days.
The issues were already discussed on support. I still have to look more at why
movement on hard floors does not go in the intended direction.

   - received Misty
   - Unboxed, attached battery, optical sensor, camera, power cord
   - Ran into a problem:
           - Second board in the head (Connector1, Connector2) with the ribbon cable
           - The board is loose/floating
           - My guess is the board shifted in shipping and the ribbon cable came loose
           - Tabs were still in the locked position
           - I asked for advice on the Misty Forums; they said to go ahead and
             attach it, and that it goes to the middle connector (Conn2, with a picture).
           - Then I could see the screen/face power up ... snapdragon -> eyes

 - Issues connecting to Bluetooth directly; it complained about a PIN
           - I do not know the PIN
           - The CompanionApp was able to connect (it probably has the PIN included).
             The Companion app could drive the wheels.
           - I have WiFi set up (I have the IP address), but the CompanionApp
             complains each time I try setting it.
             Failed multiple times to connect to WiFi:
               * 2.4 GHz - WPA2/WPA Mixed Personal - longish password with special characters, SSID broadcast
               * 5 GHz (same as above), but not expected to work
               * 2.4 GHz - guest WiFi, no security, simple password (a word with a number)
               (Each time I moved my phone to that network, then tried to have Misty connect.)
 - Attempted to connect to the IP address for Misty (on my guest network)
           - It requested a login/password; the video said for a manual upgrade it would be in a document
             (I need to find that document :)  )
           - Found the default username/password for the IoT Core Dashboard: administrator / p@ssw0rd.
             This should be in the intro doc with the startup steps.

 - Documented the setup with some pictures and simple steps.

   - Tested the mapping, using the CompanionApp
   - Drove about 8 feet and then it would no longer move
   - I picked it up and placed it back on the box on my desk.
     Driving still did not work (no friction on the motors).
   - I let it cool/sit for a few hours, reconnected with
     the CompanionApp, and the drive was working again.

   - Attempted to connect my Linux box over Bluetooth, but it did not
     work (likely due to the PIN I saw on Android; the CompanionApp worked).

   - Retested the driving with map and tracking (after about 5 hours)
      * I drove for about 10 minutes with no problems over the same area,
        over 20 feet (slow drive of 18); it worked well on carpet.
      * Issues with direction, and even movement, on wood floors.
        I would have it going straight forward in the app and it would drive in a circle,
        repeatably. But start/stop/start seemed to work better to keep
        it going in a line, adjusting to counter the drift.
    Errors with the scripts:
     1. Permissions are not correctly set in the zip file
        (executables/scripts should all be at least 744 or 755):
        cd Manual-Update
        chmod 755 *.sh platform-tools/adb
     2. There were stray characters in the file (probably edited in Word or some tool
        that does not save plain text correctly).
        I used sed to clean it, or you can just delete the 3 or so [control-V control-M] entries
        (they are the bad translation of a carriage return, normally from Word or similar tools).
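For step 2, the sed cleanup can be scripted. A minimal sketch (GNU sed; the sample file here stands in for the real update scripts, just to show the fix):

```shell
# Recreate the problem: a script saved with DOS line endings (CR+LF),
# as the zip shipped it. The stray ^M characters break the shebang.
mkdir -p Manual-Update && cd Manual-Update
printf '#!/bin/sh\r\necho hello\r\n' > sample.sh

# Strip the carriage returns (the control-M characters) in place.
sed -i 's/\r$//' sample.sh

# Restore the executable bit that the zip file lost.
chmod 755 sample.sh

./sample.sh   # now runs cleanly and prints "hello"
```

The same two commands (`sed -i 's/\r$//' *.sh` and `chmod 755 *.sh platform-tools/adb`) cover all the shipped scripts at once.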

  * A silly little concern.. (user error)
    I thought I had lost some pixels on the LCD, but I had just forgotten to move
    the mouse before I disconnected it the previous day.  Even on reboot the mouse
    cursor would show up as a tiny white dot right between the eyes.  So just
    plug in the mouse and move the cursor.  (sigh)

  * Just messing around with the APIs (web, JavaScript, Python, and JSON)
  * Testing each to see what it offered and its style
  * Looking for more of a 'brain' version to feed stimulus/response into a live engine/daemon.
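The stimulus/response idea can be sketched as a tiny dispatch loop. Everything here is hypothetical (the event names and reactions are made up); it only shows the engine/daemon shape I have in mind, with sensor readings reduced to plain dicts so it runs offline:

```python
# Minimal stimulus -> response engine sketch. Real stimuli would come
# from Misty's sensors (REST/WebSocket); here they are plain dicts.

def make_engine():
    handlers = {}

    def on(stimulus, response):
        """Register a response callable for a named stimulus type."""
        handlers.setdefault(stimulus, []).append(response)

    def feed(event):
        """Dispatch one sensor event; return the responses that fired."""
        return [respond(event) for respond in handlers.get(event["type"], [])]

    return on, feed

on, feed = make_engine()
on("bump", lambda e: "back_up")                    # hypothetical reactions
on("face_seen", lambda e: "greet:" + e["name"])

print(feed({"type": "bump"}))                       # ['back_up']
print(feed({"type": "face_seen", "name": "Mark"}))  # ['greet:Mark']
```

A live daemon would wrap `feed` in a loop over incoming sensor messages instead of calling it by hand.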

 * Tested out Matt's (station) STT/TTS
    - The STT (speech to text) is done on the computer side from the web server
    - The TTS (text to speech) from Station (Matt) works:
      text converts to a wav on the computer, and the wav file is copied to Misty and played.
    - My 6-year-old also tested STT and TTS and enjoyed harassing me by having the
      robot say things to me..  :)

    FYI: On Linux running the web server I just:
       1. Downloaded the API Explorer
       2. Unzipped it to my web directory
       3. Copied (station) Matt's file to /var/www/html/api-explorer:
          cp misty_speech_example.html /var/www/html/api-explorer/Misty.API/speech.html
       4. Opened the browser (on Linux):
             Enter the "IP Address:"      <click connect>
             Select the voice type <male|female>, voice pitch, voice rate
             Click the microphone (for voice recognition) or type a phrase, and click send.
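The computer-side half of that flow (text becomes a wav, the wav goes to Misty) can be sketched like this. The "TTS" is stubbed with a generated tone via the stdlib wave module, just so a real wav file exists; the copy/play step is only noted in a comment, since I am not assuming a specific upload endpoint:

```python
# Stand-in for the computer-side TTS step: write a real wav file that
# a real TTS engine would fill with speech samples instead of a tone.
import math
import struct
import wave

def text_to_wav(text, path, rate=16000):
    """Write a short 440 Hz tone whose length scales with the text
    length, as a placeholder for actual synthesized speech."""
    n = rate // 4 * max(1, len(text) // 8 + 1)   # frames to write
    frames = b"".join(
        struct.pack("<h", int(3000 * math.sin(2 * math.pi * 440 * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)
    return path

text_to_wav("Hello from Misty", "speech.wav")
# Next steps (not shown, endpoint assumed): copy speech.wav to Misty,
# then call its "play audio clip" API on the uploaded file.
```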

  * Head down again
  * My goal is to have:
     - the daemon/engine running, listening, and responding, going to Google for
       speech to text if needed; the text to speech will depend on the voices and effects you want.
     - journals via speech to text (STT), which I would prefer to see done without
       going to a web site. (But my thought is: where do we store the journal? The SD card
       may be an option, or the drive.)

2018-05-03 to 2018-05-10
  * I was bad and did not keep notes..

 * Setting up my Linux notebook as the 'brain', since I need a daemon to listen, and to
   do real development I need it on some flavor of Unix as a directly connected daemon;
   it can communicate with Misty's sensors via JSON and file copies, even though that is slow.
 * Working on converting the HTML to JavaScript (a programming-style library) that can be called
   from higher-performance code for doing AI/DL/NLP/NLU.
    - My goal is to have a daemon running; unfortunately it currently needs to be on an
      external computer, since the intermediate computer is Windows, which is not compatible
      with most of the software and programming that I use (open-source software).
    - Also, I want brain/personality development more than a web interface for remote control.
 * Microsoft Windows update took 10-15 minutes.
    - It stated the last Windows update failed (2018-05-12 12:52 pm CT)
    - However, Misty was still working
 * Head dropped again.  (2018-05-12 20:45/8:45pm CT)
    - I could not connect with the CompanionApp
      (More specifically: Bluetooth connected, no IP address seen, no control, capabilities not listed)
    - I could not connect with the Misty API Explorer
    - Looking at <IP ADDR>:8080 shows it is still there; the IP address is up (obviously, since
      I am connecting through it).
           - No motor, eye, or LED change response.
           - The web page said it was changing the LED and changing the eyes,
             but at times complained about connection issues.
           - I tried the Python API, which has worked reliably; it just hangs, though I can get
             additional debug from Python. (Basically it just fails to establish a new connection.)
        1. Turned off for 1 minute
            - Turned power back on, on the back and on the side board.
            - Booted; head remained down, eyes showed up
            - Web to <IP>:8080 showed the IP address/Bluetooth up
            - The CompanionApp could not find Misty (I could see Bluetooth and re-pair with it
              outside of the CompanionApp, but the CompanionApp still would not find Misty)
        2. Rebooted Misty again (power cycled) and powered on the side board
            - Eyes came on (head still down)
            - Finally the Companion App connected (I see Capabilities, etc.)
            - Drive motors work
            - Now api-explorer also works, eyes, LEDs, and manual drive
            - websocket works and battery at 12.158
            - Head still down
        3. Unplugged the small white connector from battery to board
           Waited 1 minute, plugged back in, no change.
        4. Unplugged the main power from battery, no changes
             - eye changes still work
           Waited 1 minute, plugged back in, and rebooted; eyes and default lights came up,
             head still down.  Reconnected IP and confirmed it was up.  Now eyes/LEDs do not change either.
        5. Turned off power and pulled only the main power (Battery plugged in)
            - Tried to power on without the main power connected; no power/LEDs/boot.
            - Waited 30 seconds
            - Plugged in, powered on, LEDs on, booting, head came up.
             * eyes on for about 30-45 seconds before Windows up/responsive from the <IP ADDR>:8080 web interface.
             * Not connecting via api-explorer
        6. Powered down Misty, wait 30 seconds and power up (and head went up also).
             * api-explorer now connects and works to change eyes, LED, motors work
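Since the Python API just hangs when the connection dies, a client-side timeout at least turns the hang into a quick failure. A small stdlib sketch, probing the :8080 web interface mentioned above (the wrapper function is my own, not part of any Misty API):

```python
# Probe Misty's :8080 web interface with a hard timeout instead of
# letting a dead connection hang forever.
import urllib.error
import urllib.request

def misty_alive(ip, port=8080, timeout=2.0):
    """Return True if the web interface answers within `timeout`
    seconds; False on timeout or refusal, instead of hanging."""
    url = "http://{}:{}/".format(ip, port)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Usage: quick health check before trying the API again.
# if misty_alive("10.0.0.42"): ...
```

The same timeout idea applies to any requests made against the REST API: a bounded wait makes power-cycle troubleshooting like the steps above much faster.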
    2018-05-12 22:45/10:45pm CT
        - head dropped again (still sitting on the box)
        - the WiFi connection from the Misty APIs stopped working
        - WiFi to IPaddr:8080 was working
        - powered everything down, unplugged power, and unplugged the battery
    2018-05-13 - 2018-05-16:
        - Working on Linux to find a reliable speech-to-text that does not go to the internet
          (or to a reliable, secure, non-shared data site; Google, Amazon, and the rest all read
           your data, so that is not a valid solution for journaling for many people).  I am pretty
          wide open personally and do not have any privacy concerns, but I have some friends who
          will not allow a Google Home or voice recognition due to privacy.
            * sphinx and pocketsphinx were okay but not perfect on my notebook
            * jasper on a Raspberry Pi Zero - issues - but they could be issues with the mics, too.
            * Google Voice works fine (on my Linux box, which is what I have from Matt's web
              interface to Misty). It:
                1. Records from my Linux box
                2. Generates the wav
                3. Sends it to Google
                4. Google generates the text and sends that back
                   (or the above steps can be skipped by just taking text input)
                5. Then sends text to Google to generate a wav file
                6. Google sends that back to my desktop
                7. The data is copied/sent to Misty
                8. The data is played by Misty
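Those eight steps condense into one function. The Google STT/TTS calls and the copy-to-Misty step are injected as plain callables here so the sketch runs offline; the stand-ins below are obviously fake:

```python
# The STT/TTS round trip above as one function; the cloud services
# (steps 3-6) and the Misty transfer (steps 7-8) are injected.

def voice_round_trip(wav_in, stt, tts, send_to_misty):
    text = stt(wav_in)        # steps 3-4: wav -> text (cloud STT)
    reply_wav = tts(text)     # steps 5-6: text -> wav (cloud TTS)
    send_to_misty(reply_wav)  # steps 7-8: copy to Misty and play
    return text

# Offline stand-ins for the cloud services:
sent = []
heard = voice_round_trip(
    b"fake-wav-bytes",
    stt=lambda wav: "hello misty",
    tts=lambda text: ("<wav:" + text + ">").encode(),
    send_to_misty=sent.append,
)
print(heard)  # hello misty
print(sent)   # [b'<wav:hello misty>']
```

Swapping the lambdas for real Google STT/TTS clients (or a local engine like pocketsphinx, for the privacy case) leaves the pipeline unchanged.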

        - Working on Linux to write a 'brain' (using the term VERY loosely)
           * Write the brain on Linux and use the APIs to control Misty.
             Not the most efficient, but development on Windows is slow.
           * Just first stimulus (read sensors and inputs, not really live audio,
             since it is Windows on Misty). The live audio would be from the Linux computer.
             (And I may eventually just attach a Raspberry Pi to handle that part.)
            [I would let someone else port to Windows, or hopefully replace Windows with Linux.
             It seems Misty II still has Windows. I understand it sometimes takes some
             work to get drivers ported with vendors that do not write Linux drivers first, but
             that is normally part of purchasing decisions in hardware, or you rely on community
             support. It is something we have dealt with for many years of bad hardware like
             WinModems, dumb printers, etc., taking us back to before we had standardized
             printer drivers in the 1980s/1990s.]

        - Ian mentioned a teardown
          (Feature Breakdown: Misty II Teardown) and a new video arriving 2018-05-17
        - I updated my notes with things already covered/resolved in Misty II
        - More work on various brain software, looking at what was already written. I am
          trying JavaScript, but have performance concerns in the end with either Python or
          JavaScript. For now it should be fine.

Wow, @markwdalton. You take great notes! Flagging your notes for @Jane. Mark is a great resource for detailed user feedback…

Thanks @donna for bringing me into the conversation. And @markwdalton sorry for my long delay in responding. I’m just coming up for air after the 30 days crowd funding craziness! Thanks so much for your detailed feedback. I’d love to connect and have a conversation to get more feedback from you as we plan for new features. But for now, if you had to list two things that you think are missing from Misty, what would they be?

No problem on timing; I know people at Misty are busy, with Misty II coming out.
1-2 things is hard, since you have to look at design/development as a system:
what are your end goals, then you list the capabilities and features needed. Then
you pick the next projects with the end goals in mind (so as not to limit your future
design/expandability, within the constraints of time/cost, but try to leverage them for the future also).

I recognize my top two may not be the same as yours, but they are part of a bigger picture.

The items below are needed for:
- user security - voice and face verification - associated with profiles
- personality development
- interaction - with owners, ideally through voice (and video)
- security, speed of software development, and openness of software development
  (or perhaps Misty has a deal with Microsoft, which would be the reason to use it)

  1. Voice response interface on Misty - NLP/NLR

    • Configurable wake word and/or always on (as we are, which contributes to our learning)
    • This is the most basic feature needed for developing personality
    • Main interface in ‘interaction’ with others.
    • voice recognition/verification - family voice profiles and recognition (to simplify)
    • yes, adding a Google Home/Echo Dot could be done, but that uses the
      only expansion point (the backpack), and all the hardware is already on Misty.
  2. Linux on the main interface/CPU.

    • This would have let the voice NLP/NLR, image recognition, and object detection
      be done quickly.
    • This pretty much halted me in developing on Misty directly, and I am
      doing the development on Linux and then sending the simple
      pull data/push action responses to Misty to do. And trying to find good
      ways to capture live video/audio to do the same things. (voice
      recognition/NLP/NLR, and object/face recognition).
    • Another option would be for Misty to have the main daemon interface/API
      that we can add ‘watches’ to notify and respond to requests.
    • I am working on figuring the best way to catch live audio/video to do remote NLP/NLR/images
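The 'watches' idea could look like the sketch below. All class and method names are hypothetical, just to show the register/notify shape; real readings would arrive from Misty's WebSocket subscriptions rather than by calling publish by hand:

```python
# Sketch of a daemon-side watch registry: code registers interest in
# a sensor stream and is notified when a matching reading arrives.

class WatchDaemon:
    def __init__(self):
        self._watches = []

    def watch(self, stream, predicate, callback):
        """Call callback(reading) for readings on `stream` where
        predicate(reading) is true."""
        self._watches.append((stream, predicate, callback))

    def publish(self, stream, reading):
        """Feed one sensor reading in; fan it out to matching watches."""
        for s, pred, cb in self._watches:
            if s == stream and pred(reading):
                cb(reading)

daemon = WatchDaemon()
alerts = []
daemon.watch("battery", lambda v: v < 11.0, alerts.append)
daemon.publish("battery", 12.158)  # above threshold: no alert
daemon.publish("battery", 10.7)    # below threshold: fires the watch
print(alerts)                      # [10.7]
```

With this shape, the brain code only expresses interest ("tell me when the battery is low", "tell me when a face is seen") and never polls.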

Longer-term (it of course depends on Misty's decisions on direction), these are just thoughts and wishes; my interest is in a fully AI/autonomous robot that can learn from others, learn skills through observation, and develop personality. A robot you can actually have a conversation with, and that can move with you.

  • Personality development
    • Profile for self - as personality develops for language use, and interpretation NLP
    • Profile for individuals - ‘family’, friends, strangers, perhaps even categorization of people/objects beyond ‘names of things’ to preferences, trust levels, level of influence on ‘decisions’ or modifications of personality.
  • Exchange-able frame, for wheels/tracks/legs, and for arm frame. Keep the ‘neck’/‘body’ and
    the head as the ‘marketing signature’ of the Misty look.
    * Think of it this way:
    - Larger wheels - a platform to get over rougher terrain
    - Large-track Misty - Number 5/Johnny 5
    - Misty Dog - secure Misty to the leg frame.
  • Arms need stronger attachment for lifting.

For now I am using PVC pipe to build a small frame on the base
of Misty, which I can attach to a PVC frame for legs/wheels/tracks.
Then I need to detach the control from the wheels and reattach it
to the base frame legs/wheels/tracks (separate power; the controller is
likely a Raspberry Pi) with a simple API for movement:

move(Type, Direction, Speed, Duration/Distance)

  • Type: walk, run, stop, trot, canter, gallop, dance, crouch, lay down, sit
  • Direction: 0-360 degrees
  • Speed: 0-100
  • Duration/Distance: either duration in milliseconds/seconds or distance in mm/cm (or convert)
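A minimal sketch of that move() API with argument checking, using the gaits and ranges from the list above. The transport to the controller is left as an injected callable, and the message format is an assumption (a real Raspberry Pi controller would define its own):

```python
# Validate-and-dispatch sketch for move(Type, Direction, Speed, Duration/Distance).
GAITS = {"walk", "run", "stop", "trot", "canter", "gallop",
         "dance", "crouch", "lay down", "sit"}

def move(gait, direction, speed, amount, send=print):
    """gait: one of GAITS; direction: 0-360 degrees; speed: 0-100;
    amount: duration (ms) or distance (mm), per the controller's mode."""
    if gait not in GAITS:
        raise ValueError("unknown gait: " + gait)
    if not 0 <= direction <= 360:
        raise ValueError("direction must be 0-360 degrees")
    if not 0 <= speed <= 100:
        raise ValueError("speed must be 0-100")
    # The hypothetical Raspberry Pi controller consumes this message:
    send({"cmd": "move", "gait": gait, "dir": direction,
          "speed": speed, "amount": amount})

move("trot", 90, 40, 2000)   # trot at heading 90, speed 40, for 2000 ms
```

Validating at the API boundary keeps bad commands (an unknown gait, a heading of 400) from ever reaching the motor controller.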

I think it would be useful for Misty to be usable in education/experiments/applications
with different movement systems.


Thanks for all this, it’s awesome! I will add this to our roadmap and keep you posted on progress. If you have any other ideas you’d like to share, be sure to add to our public portal here: Robot Roadmap | Misty Learns New Skills and Gets Better All the Time . You can also vote on what’s already there.

Thanks again!


I really like your exchangeable frame idea