My RSoC experience at Wyliodrin is getting better every day. We have just entered the last month of the program and I am glad to share my latest accomplishments with you.
In the last post I told you how I handled the Analog I/O part, what technologies I used and the difficulties that popped up.
Since then, I first had to do some code refactoring and redesigned the table that keeps information about the pins on the UDOO. I also re-tested all the features available up to that point.
Secondly, I focused on the Servo part, which allows users to control their servomotors. Because of the UDOO's special dual-processor architecture, with an Arduino-compatible processor alongside the i.MX6, the existing Servo library in libwyliodrin is not compatible, so I used the Firmata protocol again to make Servo work. I implemented two functions, servo_attach() and servo_write().
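Since the actual framing code lives in the repository, here is only a rough sketch of how such Firmata messages are typically built (command bytes per the standard Firmata protocol; the function names are illustrative, not the real libwyliodrin API):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Standard Firmata command bytes (per the Firmata protocol spec) */
#define FIRMATA_SET_PIN_MODE 0xF4
#define FIRMATA_ANALOG_MSG   0xE0
#define FIRMATA_MODE_SERVO   0x04

/* Frame a "set pin to servo mode" message; returns bytes written. */
size_t frame_servo_attach(uint8_t pin, uint8_t *buf)
{
    buf[0] = FIRMATA_SET_PIN_MODE;
    buf[1] = pin;
    buf[2] = FIRMATA_MODE_SERVO;
    return 3;
}

/* Frame a servo write: an analog message carrying the angle,
 * split into two 7-bit halves as Firmata requires. */
size_t frame_servo_write(uint8_t pin, int angle, uint8_t *buf)
{
    buf[0] = FIRMATA_ANALOG_MSG | (pin & 0x0F);
    buf[1] = angle & 0x7F;         /* low 7 bits  */
    buf[2] = (angle >> 7) & 0x7F;  /* high 7 bits */
    return 3;
}
```

A servo write can be a plain analog message because StandardFirmata treats the "analog value" of a pin in servo mode as its angle.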
I also took care of the I2C serial bus and coded all the functions that allow data to be sent and received over I2C.
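To give an idea of what sending I2C data through Firmata involves, here is a sketch of framing an I2C write request as a Firmata sysex message (0xF0 … 0xF7 around an I2C_REQUEST, with every payload byte split into 7-bit halves); again, the function name is illustrative, not the real libwyliodrin API:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Frame a Firmata I2C write request as a sysex message.
 * Returns the number of bytes written into buf. */
size_t frame_i2c_write(uint8_t addr, const uint8_t *data, size_t n,
                       uint8_t *buf)
{
    size_t i = 0;
    buf[i++] = 0xF0;             /* sysex start                      */
    buf[i++] = 0x76;             /* I2C_REQUEST                      */
    buf[i++] = addr & 0x7F;      /* 7-bit slave address (LSB byte)   */
    buf[i++] = 0x00;             /* MSB byte: mode bits 00 = write   */
    for (size_t k = 0; k < n; k++) {
        buf[i++] = data[k] & 0x7F;        /* low 7 bits of each byte */
        buf[i++] = (data[k] >> 7) & 0x7F; /* remaining high bit      */
    }
    buf[i++] = 0xF7;             /* sysex end                        */
    return i;
}
```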
You can follow my entire work on GitHub.
Stay tuned for my next post!
Greetings from IP Workshop and hello again!
This is my second post on the blog since RSoC 2014 started and I am very excited about how things are going. Matei and I did some very interesting stuff during this time and learned a lot about our project, and I think the way we approach new challenges shows that.
Over the past weeks I managed to deal with the most challenging task so far – coding the Analog I/O functions. There are two processors on the UDOO board: a Freescale i.MX6 and an Atmel SAM3X. The problem is that the user only has access to the i.MX6 processor, and Analog I/O cannot be controlled from there. I therefore spent a long time researching this topic and figured out how the processors can communicate with each other. I used Firmata over the serial port to make them work.
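For the curious, this is roughly what an analog read looks like on the i.MX6 side: the SAM3X reports each channel as a three-byte Firmata analog message whose 14-bit value is split into two 7-bit halves. A minimal decoding sketch (standard Firmata framing, not libwyliodrin's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 3-byte Firmata analog message into channel and value.
 * Returns 0 on success, -1 if the byte is not an analog message. */
int parse_analog_msg(const uint8_t msg[3], int *channel, int *value)
{
    if ((msg[0] & 0xF0) != 0xE0)
        return -1;                    /* not an analog message       */
    *channel = msg[0] & 0x0F;
    *value = msg[1] | (msg[2] << 7);  /* two 7-bit halves, 0..16383  */
    return 0;
}
```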
I have also implemented the Time part and some of the Advanced I/O functions that allow you to work with a shift register. I did a few tests and some code refactoring, too.
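The shift register functions follow wiring's shiftOut(): one byte is clocked out bit by bit, in the order selected by the caller. Here is a sketch of just the bit-ordering logic, with the actual pin toggling left out (names are illustrative, not the libwyliodrin implementation):

```c
#include <assert.h>
#include <stdint.h>

#define LSBFIRST 0
#define MSBFIRST 1

/* Compute the bit sequence shiftOut() would clock onto the data pin. */
void shift_out_bits(uint8_t val, int bit_order, uint8_t bits[8])
{
    for (int i = 0; i < 8; i++) {
        if (bit_order == LSBFIRST)
            bits[i] = (val >> i) & 1;        /* least significant first */
        else
            bits[i] = (val >> (7 - i)) & 1;  /* most significant first  */
    }
}
```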
I am looking forward to successfully completing the wiring library. You can find the reference for the wiring library here: .
Right now I am participating in the IP Workshop summer school in Tirgu Mures, organized by my mentors, and I am making the best of it, trying to learn as much as I can. I attend the Internet of Things course. It is a good time to test all the features that I have implemented on the UDOO board so far and to ask about anything that is still unclear.
Stay tuned for my next post!
It’s been another two weeks of “coding for decoding” for me.
What did I manage to do since my last post?
Things are looking really good so far; I'm making constant progress. A very important thing I managed to implement is collecting adaptation data from a Result object. I also implemented the part that uses this data to create the adaptation file.
In the first weeks I saw that the algorithm estimating the transform was working well, but it was still reading counts from a file generated with sphinxtrain. The next big step was to collect the counts from the result of the first decoding pass, and this is the task that has taken me the longest so far.
I also implemented a component that creates a new means file based on the adaptation data. This is equivalent to having a new, adapted acoustic model that will decode better when used with audio files containing speech from the persons the adaptation was made for.
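For context, this follows the MLLR idea: each Gaussian mean vector in the acoustic model is replaced by a linearly transformed version, with the transform estimated from the collected counts. In math terms (a sketch of the standard formulation, not a quote from the code):

```latex
\hat{\mu} = A\mu + b
```

where $\mu$ is an original mean vector, and $A$ and $b$ are the regression matrix and bias estimated from the adaptation data; the new means file simply stores the $\hat{\mu}$ values.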
You can see my work at:
- https://github.com/bogdanpetcu/sphinx4/tree/master/sphinx4-core/src/main/java/edu/cmu/sphinx/decoder/adaptation - the adaptation package, which I implemented from scratch
- https://github.com/bogdanpetcu/sphinx4/commits/master - here you can see all of my commits
Have a great week!
My name is Victor Ciurel and this summer I will be working on OpenSIPS. More precisely, I will implement a module that will allow OpenSIPS to communicate with SMPP (Short Message Peer-to-Peer) servers.
OpenSIPS is an open-source SIP proxy/server for voice, video, IM, presence and any other SIP extensions. The module I will implement will allow a SIP device and an SMPP device to exchange messages.
With the help of my mentor, Razvan Crainea, I established the flows for the communication from SIP to SMPP and vice versa. We also chose and tested a SIP client (linphone) and an SMPP library (C Open SMPP v3.4), which I will use to represent the SMPP messages.
Razvan suggested that I get familiar with module implementation and the structures used in OpenSIPS by watching a webinar and implementing my very own module, which printed the parameters given in the OpenSIPS configuration file. Having finished this print module, I started implementing the actual module that will be used for SIP/SMPP communication. I am now working on the SMPP -> SIP translation. So far, I have implemented a mock SMPP server and connected my OpenSIPS module to it.
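To illustrate the kind of data the translation deals with: every SMPP v3.4 PDU starts with a fixed 16-byte header of four big-endian 32-bit integers. A small sketch of packing that header (the struct and function names are mine, not the module's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* SMPP v3.4 PDU header: four 32-bit big-endian fields. */
struct smpp_header {
    uint32_t command_length;  /* total PDU size in bytes            */
    uint32_t command_id;      /* e.g. 0x00000009 = bind_transceiver */
    uint32_t command_status;  /* always 0 in requests               */
    uint32_t sequence_number; /* matches a request to its response  */
};

static void put_u32_be(uint8_t *p, uint32_t v)
{
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/* Serialize the header into its 16-byte wire format. */
void smpp_pack_header(const struct smpp_header *h, uint8_t out[16])
{
    put_u32_be(out + 0,  h->command_length);
    put_u32_be(out + 4,  h->command_id);
    put_u32_be(out + 8,  h->command_status);
    put_u32_be(out + 12, h->sequence_number);
}
```

The command_id distinguishes operations, e.g. 0x00000004 for submit_sm and 0x00000005 for deliver_sm, which are the PDUs most relevant to translating SIP MESSAGE traffic.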
In the following week I will finish reading up on the SIP and SMPP message structures and continue working on the SMPP -> SIP translation.
See you next time.
My name is Georgiana Chelu and this summer I am working on CMU Sphinx.
CMU Sphinx is a great open-source toolkit for speech recognition. The idea of controlling a device with your own voice is pretty amazing! I was very excited to find out that I would work on this project.
The process behind voice recognition is quite complex and you need time to get familiar with lots of new concepts. In the first two weeks we had to read the documentation and understand the code, little by little.
An important step before starting to code is to create a setup where you can test all your modifications. It makes writing code easier and prevents most of the bugs. I've created a setup that gets us accuracy numbers, the adaptation matrix and other important information. We work with a lot of data, especially sound recordings, so I wrote some bash scripts that make the setup easier to use.
Now, I am ready to move to the next step: writing the actual code of the new feature!
This is Razvan Madalin MATEI. It has been five weeks now since I started coding for Wyliodrin, so it seems it's time for me to write down a review.
The most important thing that happened since I got involved in this project is the nice collaboration between us, also known as the interns (me and Andrei), and the mentors (Ioana and Alex). Andrei and I always consult each other before starting something, and Ioana and Alex are always there for us when we are in trouble.
The second most important thing is that I really learned a _lot_ this summer. I am responsible for adapting the wiring library for the BeagleBone Black. As an outsider to the embedded world, I had a pretty harsh time configuring this board, and I am pretty proud that I have not burnt even a single LED. Yet.
I also did some coding. So far I have implemented the Digital I/O and Time functions, and now I am working on the PWM stuff and Analog I/O. I tried to adapt libmraa to work on the BeagleBone, but the pins are configured and multiplexed differently, the pin header tables are different, and the kernel offers different facilities. So I took courage and started the implementation on top of sysfs.
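To give a flavour of the sysfs approach: on the BeagleBone's AM335x, GPIOs are numbered per bank of 32, and each exported pin appears as a directory under /sys/class/gpio. A small sketch of the numbering and path logic (the helper names are illustrative, not the libwyliodrin code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The AM335x groups GPIO pins in banks of 32; the kernel numbers
 * them bank * 32 + pin (e.g. GPIO1_28 -> 60). */
int bbb_gpio_number(int bank, int pin)
{
    return bank * 32 + pin;
}

/* Build the sysfs path of a pin attribute ("value", "direction"). */
int bbb_gpio_path(int gpio, const char *attr, char *buf, size_t len)
{
    return snprintf(buf, len, "/sys/class/gpio/gpio%d/%s", gpio, attr);
}
```

Writing "60" to /sys/class/gpio/export makes the gpio60 directory appear; after that, digital reads and writes are plain file I/O on the value file.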
Since I started working on the Wyliodrin project I have kept a task-oriented journal and a work stats spreadsheet. Analysing these documents, I found out that I am most productive on Thursdays. I advise every intern to do the same, as it helps both interns and mentors keep track of the work.
I also took the initiative and started the Wyliodrin Coding Style Convention. Andrei and I are constantly updating this document with guidelines for a homogeneous library. This is our legacy for future interns and coders on libwyliodrin.
Th-th-th-that’s all folks!
My name is Bogdan Petcu and I am working within this year’s RSoC program at CMU Sphinx.
CMU Sphinx is a toolkit for building applications that use speech recognition. Our aim for this summer is to implement a module in Sphinx4 that adapts the data used for decoding so that the recognition process gives better results.
For decoding with Sphinx4 you could use a general acoustic model (e.g. for the English language), but if you want to decode audio files containing speech from non-native speakers, or if the recording environment has background noise, you would want to adapt this general acoustic model to those particular speakers so that the decoding process is more precise. Currently, adapting an acoustic model requires the sphinxtrain tool provided by CMU Sphinx, and using sphinxtrain requires manually building some files (recordings, their transcriptions, etc.).
Our aim is to make Sphinx4 adapt the acoustic model by itself, using information from the first decoding pass, and then redecode with the adapted model, improving the recognition process.
So far I have implemented a component that collects adaptation data and another component that, based on the collected data, builds a file containing the transformation that will be applied to the acoustic model in order to adapt it.
Github repository: https://github.com/bogdanpetcu/sphinx4
My name is Andrei Dinu and this summer I am working at Wyliodrin, a service that allows passionate people to program their embedded devices remotely, using a browser or visual programming.
Until now, Wyliodrin has supported only the Raspberry Pi and Arduino Galileo boards. My main goal these months is to extend libwyliodrin so that it works on the UDOO board. I would also like to develop some new features for the Raspberry Pi.
I spent the first two weeks reading up heavily on the Raspberry Pi, the UDOO and a professional tool designed to build, test and package software. I knew almost nothing about embedded systems, but I was enthusiastic. I installed all the required libraries, set up both boards, tried to understand the code and found a few bugs that I managed to fix.
Over the last two weeks, I first made a script that detects which version of the RPi you have and tried to implement some new functions. I have left the RPi aside for the moment and am now taking care of the UDOO: I designed the pin table associated with the board and implemented almost all of the GPIO configuration functions. I tested them, too.
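The version detection boils down to reading the Revision field from /proc/cpuinfo and looking it up in a revision table. A sketch of that lookup in C (assuming the classic pre-2014 revision codes; the function is illustrative, not the script itself):

```c
#include <assert.h>
#include <string.h>

/* Map an old-style /proc/cpuinfo "Revision" code to a board name,
 * per the classic Raspberry Pi revision table. */
const char *rpi_board_from_revision(unsigned code)
{
    code &= 0xFFFF;  /* strip the warranty/overvolt flag bits */
    if (code >= 0x0002 && code <= 0x0003) return "Model B rev 1";
    if (code >= 0x0004 && code <= 0x0006) return "Model B rev 2";
    if (code >= 0x0007 && code <= 0x0009) return "Model A";
    if (code >= 0x000d && code <= 0x000f) return "Model B rev 2";
    return "unknown";
}
```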
I am currently working on the wiring library. I have coded the Digital I/O part; the next step is the Analog I/O part, which works differently from other boards and is a little bit tricky. You can follow my work on GitHub, on the udoo branch.
Stay tuned for the next post!
VMChecker will have a new look based on Meteor.js
So far I have reimplemented the vmchecker interface to match the previous one, and added the ability to download your last submission from the server (it still needs polishing).
- Reimplemented the site using Meteor.js and Node.js
- Rearranged the elements of the site
- The last submission can now be downloaded
- Fixed some bugs