#5 World of USO – Code Refactoring

Hi again,

Over the past two weeks I focused on refactoring views that used a workaround for passing success and error messages to the next view: they rendered the template with two extra context variables (‘message’ and ‘error’), leaving the template responsible for displaying those messages.

However, Django provides an easy way of achieving this through its django.contrib.messages module. After a quick scan of the code base I found a function called ‘do_result’ in the challenge module, which was responsible for creating and passing those two extra variables to a certain template. Alex encouraged me to delete it and, wherever ‘do_result’ was called, use the Django messages framework followed by a redirect to the challenges’ homepage.
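For illustration, here is a minimal Python sketch of the before/after pattern. The stand-in classes only mimic django.contrib.messages and django.shortcuts.redirect, and names like `submit_answer` are invented; this is the shape of the change, not the actual WoUSO code.

```python
# Stand-ins that mimic the Django pieces involved; a sketch of the
# pattern, not the real WoUSO code.

class MessageStore:
    """Mimics django.contrib.messages: a per-request message queue."""
    def __init__(self):
        self.queue = []

    def success(self, text):
        self.queue.append(('success', text))

    def error(self, text):
        self.queue.append(('error', text))

def redirect(url):
    """Mimics django.shortcuts.redirect."""
    return (302, url)

# Old approach: do_result() rendered a template with two extra context
# variables, and every template had to display them itself.
def do_result(template, message=None, error=None):
    return (200, template, {'message': message, 'error': error})

# New approach: queue the message, then redirect to the challenges'
# homepage; a shared template snippet displays any queued messages.
def submit_answer(messages, correct):
    if correct:
        messages.success('Correct answer!')
    else:
        messages.error('Wrong answer.')
    return redirect('/challenges/')
```

The win is that views no longer care how messages are displayed; one template snippet renders the queue for every page.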

While refactoring a view from the magic module which did not use the messages framework, I stumbled upon a weird issue that needs further investigation. I tried to turn some points into gold using the exchange feature. Unfortunately, after hitting the ‘exchange’ button, I ended up with a negative amount of gold.

I have also improved the social login feature by making it pluggable: setting SOCIAL_AUTH_ENABLED to ‘True’ or ‘False’ in settings.py activates or deactivates social login. The tricky part was that I didn’t know how to access a variable from settings.py in the templates. The solution was configuring an existing context processor to pass the needed value to the templates.
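A context processor is just a function that takes the request and returns a dict merged into every template’s context. A minimal sketch of the idea, with a stub standing in for django.conf.settings (the function and variable names below are illustrative, not the actual WoUSO code):

```python
class settings:
    """Stand-in for django.conf.settings."""
    SOCIAL_AUTH_ENABLED = True

def social_auth(request):
    """Context processor: expose the flag to every rendered template.

    In Django this function would be listed in
    TEMPLATE_CONTEXT_PROCESSORS so the value becomes available in
    templates as {{ social_auth_enabled }}.
    """
    return {'social_auth_enabled': settings.SOCIAL_AUTH_ENABLED}
```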

Don’t forget to check out this blog for more posts about this project!

Mozilla Firefox #5


For the past few weeks I’ve been building on the Networking Dashboard’s integration in Firefox and I’ve fixed some bugs in the graphical user interface.

In the first week after the last evaluation we received a mail from the module owner with some suggestions about the GUI. He wanted us to add some JavaScript to make our table’s data sortable by clicking a column header. I took this bug and came up with a simple solution: not the most efficient, but I think the most suitable for our situation.

A listener on the table headers gives me the index of the clicked column. I take the table rows, put them into an array and sort it with the JavaScript Array.sort() method and a suitable comparison callback, so the table ends up sorted by the clicked column.
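The idea is language-agnostic; here it is sketched in Python rather than the dashboard’s JavaScript, with made-up row data:

```python
def sort_rows(rows, col, numeric=False, descending=False):
    """Sort table rows (lists of cell strings) by the clicked column.

    `numeric` selects a numeric comparison for columns such as ports,
    mirroring the comparison callback passed to Array.sort() in JS.
    """
    key = (lambda r: float(r[col])) if numeric else (lambda r: r[col])
    return sorted(rows, key=key, reverse=descending)

# Illustrative data, not real dashboard rows.
rows = [['example.com', '443'],
        ['mozilla.org', '80'],
        ['aaa.net', '8080']]
```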

This method is not that efficient because it takes the already rendered table, sorts it and renders it again. (That is fine when sorting a table that has already been rendered, but what about keeping the sort order between table refreshes?) Rendering a table is pretty expensive, so my reviewer advised me to sort the data before the first render; that way only one sort and one render take place on each refresh.

This was a little trickier because I had to sort several arrays stored in a JS object in parallel. I figured a solution would be to sort the array corresponding to the sorting column with a special comparison callback that caches the result of each comparison. The other arrays in the object are then sorted with a comparison callback that only returns the cached results. It works great, but there are some problems that make me wonder whether it’s worth it; for now I’m waiting for feedback.
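The trick can be sketched in Python (the JS version drives Array.sort() the same way): sort the key column once while recording each comparison’s outcome, then sort every other column with a callback that just replays the recorded outcomes. Because the sort algorithm is deterministic, identical outcomes on arrays of the same length produce the identical permutation.

```python
from functools import cmp_to_key

def sort_parallel(table, key_col):
    """table: dict of column name -> list, all lists the same length.

    Sorts the key column normally, caching comparison results, then
    applies the same permutation to the other columns by replaying
    the cached results into their comparison callbacks.
    """
    cache = []

    def recording_cmp(a, b):
        result = (a > b) - (a < b)
        cache.append(result)
        return result

    table[key_col].sort(key=cmp_to_key(recording_cmp))

    for name, column in table.items():
        if name == key_col:
            continue
        replay = iter(cache)
        # Ignore the arguments: just return the next cached outcome.
        column.sort(key=cmp_to_key(lambda a, b: next(replay)))
    return table
```

Whether this beats simply zipping the columns into row tuples and sorting those is the open question I mentioned above.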

Another bug I filed focuses on the refreshing feature. Initially, the refresh button and the auto-refresh checkbox requested new data for all the existing tabs. This hurt performance, especially with the auto-refresh feature, so I fixed it. Valentin came up with a very good idea: let the refresh button request data for all the tabs, in case one wants a snapshot of all the data at a specific moment in time, while the auto-refresh checkbox requests data only for the active tab. It’s done and landed in trunk.

Between these bugs I discovered a crash in the dashboard’s menu; I had slipped it in when I helped Valentin with the integration :D. It’s now fixed.

Our next goal is to land some tests for the dashboard. Those were some fun weeks, see you next post!

Fortnightly Post #4.7: Long time, no post

Hi, there! It has been a while since I last posted. Time has swiftly passed and there have been notable events galore. I have enjoyed the spare time I planned from the start, and now it’s time to get back to work.

Last time I talked about the “blueprints” for the tag page. It is now almost complete, but unfortunately we might give it up. Why, you may ask? Because we haven’t yet decided which format the images will be in; it depends on those who “draw” (better said, create) them. They might be SVG, or a section of a 3D model, in which case the drawer will also be the one to tag them. I am quite happy with my tag page, as I learnt a gamut of technologies: JavaScript, JSON, AJAX, (better) PHP, and plugins like jCrop and Select2.

Because the updates for the tag page have stalled, I now have to focus on the presentation part. I have to create a gallery for the forthcoming images and I have one plugin in my mind but it first needs approval. Till then…

Happy birthday DEX Online!

Mozilla Firefox – The Networking Dashboard. Week 6 and 7


The Networking Dashboard has finally been included in the Mozilla Core code base. To see it you will have to get Firefox Nightly, but I would recommend patience: the product is far from final and we still have a lot of work to do. We are pleased that this has finally happened, and also by the support we already see from people reporting bugs (not many, though :) ).

In these two weeks I haven’t been able to continue my work on the Proxy Settings Test Diagnostic Tool because Patrick (owner of the networking module) apparently had a lot of work to do, and we were waiting for his review to know what I should modify, or whether my work is good so far.

I’ve started to work on the logging bug, but after a few days Valentin and I realised that it is more complicated than we had expected. We also found out that a few developers are already working on something similar. I will get in touch with them and see if I can help with something (I’d love to).

I have continued working on some UI features and I’ve also prepared for the midterm.

About our meeting at ROSEdu – well, what can I say? It was a lot of fun. We were pleased with our presentation and the game of bowling afterwards.

Not a lot happened in these two weeks, but I’m glad that I’ve been able to get a little break.

See you next post!

#4 Teamshare – Peer-to-Peer Streaming Peer Protocol


In the seventh week I continued writing unit tests for my team configuration generator. The unit tests now cover a large part of the functionality of the two generators.

At my mentor’s suggestion I started learning about the protocol that Teamshare is going to use for data transfers, Peer-to-Peer Streaming Peer Protocol (PPSPP). I will briefly introduce the protocol in the remainder of the post.

PPSPP is a protocol for disseminating the same content to a group of interested parties in a streaming fashion. The protocol supports both pre-recorded and live data transfer. In contrast to other peer-to-peer protocols, it has been designed to provide shorter time-till-playback, and to prevent disruption of the streams by malicious peers. In my opinion, the most interesting parts of PPSPP are the chunk addressing schemes and the content integrity protection.

Regarding chunk addressing, PPSPP uses start-end ranges and bin numbers. As the name suggests, a start-end range identifies chunks by specifying the beginning and ending chunk. Bin numbers are a novel method of addressing chunks in which a binary interval of data is addressed by a single integer, which reduces the amount of data every peer has to record.
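As I understand it from the draft, a bin number packs an aligned interval [start, start + 2^layer) of chunks into the single integer 2*start + 2^layer - 1, so individual chunks get the even numbers 0, 2, 4, … A quick Python sketch of the mapping and its inverse:

```python
def bin_number(start, layer):
    """Bin covering the 2**layer chunks beginning at chunk `start`.

    The interval must be aligned: `start` is a multiple of 2**layer.
    """
    assert start % (1 << layer) == 0, "interval must be aligned"
    return 2 * start + (1 << layer) - 1

def bin_to_range(b):
    """Inverse mapping: bin number -> (first_chunk, last_chunk)."""
    layer = 0
    while (b >> layer) & 1:       # trailing 1-bits encode the layer
        layer += 1
    start = (b - (1 << layer) + 1) // 2
    return start, start + (1 << layer) - 1
```

So bin 3, for example, covers chunks 0 through 3 in one integer, which is what saves peers from recording long chunk lists.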

For content integrity protection, PPSPP uses the Merkle Hash Tree scheme for static transfers, and a Unified Merkle Hash Tree scheme, which adds a public key for verification. The content is identified by a single cryptographic hash: the root hash of a Merkle hash tree, calculated recursively from the content. In contrast with BitTorrent, which needs all the chunk hashes before it can start the download, PPSPP needs only a part of them, which leads to limited overhead, especially for small chunks.
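The core idea can be shown with a toy Merkle tree in Python. This is simplified (the real scheme pads the tree to a fixed width and specifies the exact hash function and inputs); it only illustrates how one root hash pins down every chunk:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash a node; SHA-1 chosen only for the example."""
    return hashlib.sha1(data).digest()

def merkle_root(chunks):
    """Hash each chunk, then pairwise-combine levels up to one root."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last hash if odd count
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any chunk changes the root, so a peer that trusts the root hash can verify chunks incrementally as sibling hashes arrive.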


For more details, feel free to read the IETF draft at http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-07.

#7 DexOnline – Romanian Literature Crawler


Sorry, I forgot to provide you with a link to my work:


Last week I forgot to post, so I’ll state my progress here: I learned how to use the Smarty library, with which I built a functional crawlerLog page that lets you see the Crawler’s progress from your computer or smartphone.

This week I used AJAX on the crawlerLog page to refresh its information every 5 seconds, and I fixed the www.romlit.ro broken-HTML problem at a general level (I repair the broken HTML using simple_html_dom, removing styles and scripts and adding body tags where there are none), so I don’t have to use a different HTML parser for romlit. I also improved the Crawler by adding features like crawling a certain area of a site, and I abstracted the database query layer for easier technology changes (e.g. MySQL does not scale very well with the amount of data we keep gathering, so we may turn to PL/SQL).
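The repair strategy can be sketched in a few lines of Python (the actual crawler does this in PHP with simple_html_dom; the regex here is only a rough illustration of the same steps):

```python
import re

def repair_html(html):
    """Drop <script>/<style> blocks, then wrap in <body> if missing."""
    html = re.sub(r'<(script|style)\b.*?</\1\s*>', '', html,
                  flags=re.IGNORECASE | re.DOTALL)
    if '<body' not in html.lower():
        html = '<body>%s</body>' % html
    return html
```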

#4 World of USO – Social Login


I’m back to work, after a seaside vacation.

My current task is to make it possible for users to log in through various social networks, such as Facebook, Twitter and Google. This is quite important because we might run World of USO in another context, and users would be more likely to try our game if they could log in with an existing social account.

I started reading about the OAuth protocol and how the login mechanism works. I learned that you have to follow a series of steps before you are granted permission to access the user’s data. First, you register your app with the desired social network to get a unique ID. After that, you make a GET request to their servers with some parameters (app_id, redirect_uri). They give you back a code (if the user authorizes your app), which you then exchange for an access token. Finally, you use that access token to fetch the data you need through their API.
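The first step of that dance can be sketched with Python’s standard library. The endpoint and parameter names mirror what the post describes for Facebook, but treat them as illustrative; the provider’s own documentation is the real contract:

```python
from urllib.parse import urlencode

# Illustrative endpoint; check the provider's OAuth documentation.
AUTH_ENDPOINT = 'https://www.facebook.com/dialog/oauth'

def authorize_url(app_id, redirect_uri):
    """Build the URL the user visits to authorize the app.

    After authorization, the provider redirects back to `redirect_uri`
    with a short-lived code to exchange for an access token.
    """
    params = {'client_id': app_id, 'redirect_uri': redirect_uri}
    return AUTH_ENDPOINT + '?' + urlencode(params)
```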

I was able to implement that routine myself for Facebook, after reading their documentation. But there are some pitfalls regarding user creation. Therefore, Alex and I decided to use a tested and well-known mechanism among Django users. It is called django-social-auth and it does exactly what we need.

I managed to integrate django-social-auth with World of USO, and users are now able to log in with Facebook and Twitter. It raises a weird exception when authenticating with Google, but I think it can be fixed. I am now waiting for Alex’s review and further instructions.

The thing I enjoyed most about working on the social login was that I got to talk with the man who wrote django-social-auth. I was confused about how the mechanism was authenticating its users, so I decided to send a mail to its creator. He responded very fast and was patient with me. That’s why I love the open source community!

Below is a screenshot with the newly added feature.

Stay tuned!


FinTP Application GUI #3

GitHub Repository – FinTP Configuration Wizzard


In this third post for the RSoC program, I will present the user interface I am working on for the FinTP project. If you read my last post, you know by now that in order to configure FinTP you have to write XML configuration files for all the connectors that are part of the application.

Here is an example of a possible XML file for a particular connector.

There are some mappings for the interface: every sectionGroup in the XML goes to a separate tab in the UI, and all its child tags go on that tab’s page as elements, which can be labels, fields, drop-down menus, etc.

The purpose of this application is to read the XML file and create the interface for it. You can then modify fields and update the current XML or write a new one.
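The mapping can be illustrated in Python with ElementTree (the wizard itself uses Qt’s QDomDocument in C++; the tag and attribute names below are invented for the example, not taken from FinTP’s real configuration files):

```python
import xml.etree.ElementTree as ET

# Invented sample: each sectionGroup becomes a tab, each child a widget.
SAMPLE = """
<configuration>
  <sectionGroup name="General">
    <key name="AppName" value="FinTP"/>
    <key name="LogLevel" value="debug"/>
  </sectionGroup>
  <sectionGroup name="Connector">
    <key name="QueueName" value="incoming"/>
  </sectionGroup>
</configuration>
"""

def build_ui_model(xml_text):
    """Map sectionGroups to tabs and child tags to (label, value) pairs."""
    root = ET.fromstring(xml_text)
    tabs = {}
    for group in root.findall('sectionGroup'):
        tabs[group.get('name')] = [(k.get('name'), k.get('value'))
                                   for k in group]
    return tabs
```

Writing back is the reverse walk: update attribute values from the widgets and serialize the tree to a new file.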

This is how it looks so far. I’m using QDomDocument, a DOM parser; while it parses the XML file, it populates the UI with Qt widgets. Depending on a tag’s name, its attributes or inner text can become combo boxes, line edits or labels.

I have also added a menu to this interface from which the user can open another XML file, save the interface into a new one, or update the existing file. These functions are still a work in progress; I have to learn more about Qt’s signals and slots mechanism.
Until next time I will try to finish them and add some new functionality my mentor suggested, such as using XSLT files to transform our XML files into something else.

#4 Mozilla Firefox


For the last two weeks I worked on some Telemetry bugs. For the first one I had to report whether or not the DoNotTrack privacy feature was used and, if it was, which option the user selected.

I started with a patch that reported the specified data even if the user toggled between the options. I sent it and asked for feedback, but I was sceptical about its behavior and the way it approximated the user’s choices, so I looked for a way to report that data once per session. It wasn’t that difficult: I called the telemetry code in the nsHttpHandler destructor. We were not sure the data would actually be reported, because HTTP was shutting down at the same time, so I set some breakpoints and saw that the destructor was called at the right time. The patch landed a few days ago and I hope the numbers will help the DNT developers.

After that, I started working on another bug, which was supposed to report HTTP connection utilization. I began with the first two tasks: how often a backup connection is created, and how often that backup is never used. Then, looking through the code, I realized there was a lot of work and a lot of new concepts, and I got stuck right in the middle of it. I lost a lot of time understanding the code; the algorithms used there were not documented anywhere except some comments. Still, I am glad I did it because, with the help of my mentor and the community, I learned a lot of new things and some great strategies, one of which I will present next: it’s called “happy eyeballs”.

“Happy eyeballs” is an algorithm that supports dual-stack applications, where both IPv4 and IPv6 are available. Firefox does not implement the classic strategy; it has some small changes, which I managed to understand from Bugzilla discussions and code comments. In a simplified version: a primary connection is created with the preferred IP version, and a 250 ms timer starts. If the connection is established before the timer expires, no backup is created; otherwise a backup connection is born with an IPv4 address. Input/output streams are attached to each connection, and after the backup is created Firefox listens for an output stream “ready” event, so the connection whose stream is ready first is the one used.
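This is not Firefox code, but the timer logic above can be modeled in a few lines of Python (times are illustrative milliseconds, and the function is a simplification of the real event-driven implementation):

```python
BACKUP_DELAY_MS = 250   # timer started alongside the primary connection

def pick_connection(primary_ready_ms, backup_ready_after_start_ms):
    """Return (winner, backup_created) for the simplified race.

    primary_ready_ms: when the primary connection becomes ready.
    backup_ready_after_start_ms: how long the backup takes once started.
    """
    if primary_ready_ms <= BACKUP_DELAY_MS:
        return 'primary', False       # timer never fired, no backup
    backup_ready_ms = BACKUP_DELAY_MS + backup_ready_after_start_ms
    if backup_ready_ms < primary_ready_ms:
        return 'backup', True         # backup's stream was ready first
    return 'primary', True            # backup created but lost the race
```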

That’s how my last two weeks went. Tomorrow I will try to finish this last bug, and then we will continue working on the Networking Dashboard; maybe we will write some unit tests.