dmitry (OP) · 2023-06-28 16:49:13
A long time ago, when I was working on the initial implementation of Android Wear at Google, I actually worked with the code that does stuff like this, so I might be able to answer somewhat usefully. At least there, the way it worked is that every 30 seconds the device would wake up the accelerometer and collect 2 seconds of data at 120(?)Hz. After that, a relatively large decision tree ran over the values, their derivatives, etc. This decision tree was the output of a large trained model but was itself pretty small: a few thousand values. It could only classify things it was trained on; the output was an activity index. At Google at the time, the supported activities were: walking, biking, running, driving, sitting, and unknown. The model could not output anything other than an activity index.
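
(In other words, each wakeup yields roughly 2 s × 120 Hz = 240 samples per axis, or 720 values if all three axes are used; the three-axis part is an assumption on my end, but some fixed-size vector of that shape is what the tree consumes.)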

The practical upshot: could one detect such activities from accelerometer data? Surely yes. However, unless somebody trained the model on masturbation, that is unlikely to be a possible output.

Details: the model format was more or less this:

    struct node {
        int activity;       // if positive, this is a terminal node and
                            // the value is the answer (the activity
                            // index); otherwise negate it to get the
                            // index of the input sample to read and
                            // compare against compareWith
        float compareWith;
        unsigned gotoNodeIdxIfLessThan;
        unsigned gotoNodeIdxIfGreaterOrEq;
    };

    struct model {
        struct node nodes[];   // nodes[0] is the root
    };
You’d start at nodes[0] and walk the tree per the comparison instructions (read the indicated input sample, compare it against the float) until you reached a terminal node.
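
For concreteness, here's a minimal sketch of that walk in C, using the struct above. The function name classify and the flat samples array are my inventions for illustration, and I'm assuming activity is never 0, since the sign is what distinguishes terminal from internal nodes:

    // Walk the tree from the root until a terminal node is hit.
    // samples: the flat array of input values the tree indexes into.
    int classify(const struct model *m, const float *samples)
    {
        unsigned idx = 0;                       // start at the root
        for (;;) {
            const struct node *n = &m->nodes[idx];
            if (n->activity > 0)
                return n->activity;             // terminal: the activity index
            // Internal node: activity is the sample index times minus one.
            float v = samples[-n->activity];
            idx = (v < n->compareWith) ? n->gotoNodeIdxIfLessThan
                                       : n->gotoNodeIdxIfGreaterOrEq;
        }
    }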