Planet CDOT

January 30, 2015

Kenny Nguyen

Brackets iFrameSecondPane Extension Issues

I shall be referring to this:

The extension creates an iframe next to the current CodeMirror instance in Brackets.

Current Issue:

Presently, the way my extension works is that it hijacks one of Brackets' built-in functions.

It loads SplitVerticalView to create two CodeMirror instances side by side:

Then I hijack the second CodeMirror instance:

I hijack it by erasing the contents of the second pane and then injecting my HTML, shown below:

<div id="second-pane" class="view-pane active-pane" style="height: 100%; width: 100%; float: none;">
    <div class="pane-header"><em>Preview Panel</em></div>
    <iframe src="extensions/default/iFrameSecondPane/2panecontent.html" id="preview-pane" class="pane-content" style="width: 100%; height: 100%;"></iframe>
</div>

My issue comes in when I try to load this either locally or in the browser.

When I load the HTML file 2panecontent.html, Brackets assumes I'm in the src folder. This is an issue because of where Brackets itself keeps the extensions folder.

On the web it holds extensions within the src folder:


But on Mac the default extensions folder lies in:


If we jump into the Brackets application itself and install the extension as a default one, in theory it should work if we add the extension to


by Kenny Nguyen at January 30, 2015 06:33 PM

January 28, 2015

Maxwell LeFevre

Bash Scripts and Terminal Setup

I think that we will be doing a lot of benchmarking this term, so I decided to write a couple of scripts to make it easier for me. I placed them in a folder called .myScripts on both Red and Australia and tweaked my .bash_profile on each machine to add aliases for them.

alias sudo='sudo '
alias benchmark='/home/mlefevre/.myScripts/ '

I also had to add an alias for sudo with a space at the end because, by default, only the first word of a command is checked to see if it is an alias; the trailing space tells bash to also check the word that follows.
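The trailing-space behaviour is easy to demonstrate without sudo. In this sketch, run and greet are made-up aliases standing in for sudo and benchmark:

```shell
shopt -s expand_aliases   # alias expansion is off by default in scripts
alias run='env '          # trailing space: bash also alias-expands the next word
alias greet='echo hello'
run greet                 # expands to: env echo hello
```

Without the trailing space, run greet would become env greet and fail, because greet is only an alias, not a real command.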

The following script generates a profile for the machine:

#!/bin/bash
echo -e "\nHardware Profile (lshw): " ; lshw
echo -e "\nMemory Usage (free): " ; free
echo -e "\nCpu Details (cat /proc/cpuinfo): " ; cat /proc/cpuinfo
echo -e "\nOS Info (cat /etc/*release*): " ; cat /etc/*release*
echo -e "\nInstalled Packages (rpm -qa): " ; rpm -qa
echo -e "\nStorage Info (hdinfo): " ; hdinfo
echo -e "\nRAID/md state (cat /proc/mdstat): " ; cat /proc/mdstat
echo -e "\nPhysical Volume Info (pvs): " ; pvs
echo -e "\nVolume Groups (vgs): " ; vgs
echo -e "\nLogical Volumes (lvs): " ; lvs
echo -e "\nUsers Logged in (who): " ; who

The second script calls the profiling script and then runs the specified command a user-defined number of times.

#!/bin/bash
echo "System Profile: "

echo -e "\nBenchmark Results: "
echo -e "Iterations: $1 \nCommand: $2 \nArgument: $3"
for ((i=1;i<=$1;i+=1))
do
    echo "Test $i"
    eval $2 $3
done

I am able to use these scripts from anywhere by typing 'benchmark <num of iterations> <command> <command arg>', e.g. 'sudo benchmark 10 php bench.php'.

The last thing I did was set up my personal environment so that I can more easily differentiate between which terminal window is for a specific machine. I added the following lines to my personal .bashrc file:

function tabc {
    NAME=$1; if [ -z "$NAME" ]; then NAME="Basic"; fi
    osascript -e "tell application \"Terminal\" to set current settings of front window to settings set \"$NAME\""
}

function ssh {
    if [ "$@" = "red" ]; then
        tabc "Red Sands"
    elif [ "$@" = "aussie" ]; then
        tabc "Solarized Light"
    fi
    /usr/bin/ssh "$@"
    tabc "Basic"
}

The result of these lines is that when I ssh to a machine my colour scheme changes. The first function, tabc, uses AppleScript to change the colour scheme of the Terminal window to the parameter passed to it. The second function captures when the user tries to make an ssh connection. It reads the name of the connection and compares it to the defined connections, red and aussie. If it finds a match it changes the colour scheme to the one defined: 'Red Sands' for red and 'Solarized Light' for aussie. Then it passes the user's ssh request to /usr/bin/ssh. When ssh exits, it switches the colours back to the default scheme.

Since making these changes I have found it noticeably easier and faster to work over ssh with these systems. The tabc function won't work on non-OS X operating systems because it uses AppleScript, but similar capabilities can be found in most other bash environments. This is not a mandatory post for SPO600, but the ideas contained within might be helpful to others, so I have decided to share it anyway.

by maxwelllefevre at January 28, 2015 04:50 PM

Neil Guzman

CDOT demo #3

Yesterday's demo #3 was an intro to Cordova by Hosung Hwang. I didn't know it was possible to create an android application using HTML5 and JavaScript. It sounds like a great way for web developers to get into the android market. Speaking about android, our android team members are doing a great job with creating the client applications. The server side also seems to be on track with handling messages received by the client and sending the appropriate messages.

We managed to emulate one of the client applications following a route while the server calculated the client's velocity. The server side considered making a little minimap or radar for debugging purposes, using PyGame or pyglet to draw the maps, but it's not our current priority. It would, however, look really kool and make what we are doing easier to visualize. For now, we decided to get the server-side calculations working as soon as possible, and then start on other stuff.

Maybe next week I can start adding some short Python stuff I have learned or think is kool.

by nbguzman at January 28, 2015 04:44 PM

Kenny Nguyen

Brackets Run-through Week One

Alright, redoing this blog post.

This post will cover brackets.js and what its various modules do. But first I'll need to cover the order in which the program runs to get to that point.

  1. Index.html -> loads necessary plugins synchronously, then loads main.js
  2. Main.js -> checks for errors, then calls require to run Compatibility for browser compatibility, and then finally launches brackets.js
  3. Within brackets.js I'm slowly checking to see which extensions we won't be needing in our implementation


  1. This module's function is to query the Brackets server, check if there's an update available for the Brackets program itself, and pop up a window for the user to download the update
  2. Obviously, with our current use-case scenario of Brackets within an iframe/div, this is a function we don't want interfering with the user experience
  3. To fully remove this module we need to remove references to it within the following files:
    • brackets.js
      • remove require, remove it from test, take out reference in appinit
    • HelpCommandHandlers.js
      • under the Help menu displayed at the top, the user is currently able to manually initialize an update; this functionality should be removed
    • UnitTestSuite.js
      • refers to UpdateNotification-test.js, should be removed if we are removing UpdateNotification-test.js's functionality
    • brackets-concat.js
      • appears to deal with creating the pop-up window; references to UpdateNotification functionality appear to be removable seeing as we are removing it
    • UpdateNotification-test.js
      • appears to be a test function, no real functionality lost from removing it


  • KeyBindingManager's purpose is to set up keys you can use to issue commands such as hide side panel, and
  • Error arises within the same file, on line 192

    KeyBindingManager : KeyBindingManager; this relates to when brackets.js tests whether extensions have been loaded.

  • caveat: although removable, KeyBindingManager is used in so many different files that it might be better off to just leave it in


  • Handles events where the user drags a file into the CodeMirror instance
  • presently removable, seeing as it has not yet been implemented in the main Brackets code for browser-appshell

by Kenny Nguyen at January 28, 2015 04:38 PM

Artem Luzyanin

Lab 3 or “profiling unknown”

For the third lab of SPO600 we had to profile a piece of software. A lot was learned from the previous lab, so a lot of mistakes were avoided. We picked a new piece of software to work on, as the SQL server was a bit too complicated and its tests were running for way too long. This time around we went for the Python package. Building the software with the "-pg" option for the profiler was complicated, as we had to edit the Makefile, adding the "-pg" option under "CC=" and under "BASECFLAGS=". Then it was a matter of building it (it took a third of the time it took SQL to build) and running the script that triggered all the benchmarking tests.
We ran two tests, one on each of the two available servers. Due to differences in the server configurations and parameters (whose specs we are not allowed to disclose), the profiling results were substantially different.
Profiling done on the Australia server led to the following result (I know that we were told not to use screenshots, but it is hard to describe a 3 MB .png file in words):

Profiling on AU


As we can see, there are a lot of methods, starting from Main, that are executed only once and do virtually nothing. Also, the process that takes care of switching between threads takes up a notable percentage of CPU. The process with the highest ratio of CPU consumption to number of runs seems to be "PyEval_EvalFrameEx", which takes up 14.5% of the total time while executing only around 65k times. It seems that this block could be a good target for improving performance.
Profiling done on the Red server yielded somewhat different results:

Profiling on Red


As we can see, "PyEval_EvalFrameEx" was run the same number of times, but it takes up only 12.69%. Also, on the Red server, this process had a lot more done by itself and its children (23.9% on Australia vs 55.47% on Red). Such different behaviour of the same tests on two machines shows a certain need for code optimisation.
Overall, I am glad that this lab helped me learn how to look for a problematic process that needs to be optimised. The next step is to learn what needs to be done, and how, to perfect those blocks.

by lisyonok85 at January 28, 2015 03:52 PM

James Boyer

Contributing to open source projects

Lab 1, Contributing to open source projects
I have chosen three projects: wmii, Go, and Plan 9. The lab suggested two projects but I did three because wmii is very small and I couldn't really find that much content about it, but I still wanted to advertise it because I think it's a ton of fun.
Wmii is a small, simple dynamic tiling window manager that borrows ideas and aesthetics from the Plan 9 operating system, specifically the acme text editor.
Plan 9 is a very interesting operating system that was developed shortly after UNIX by the same team (Pike, Thompson, Ritchie). They were essentially given free rein to create this OS, and they came up with something that is very unique and full of interesting ideas. It has been open source since 2002.
Go is a new programming language being developed at Google by prominent software engineers, namely Ken Thompson (UNIX, B, UTF-8) and Rob Pike (UTF-8, Plan 9), with help from the open source community.


Go is distributed under a BSD-style license with an accompanying patent grant. License | Patent grant
To start contributing to the Go language you don't start with coding; their site suggests you discuss your idea with other members of the open source community via their mailing list. This helps you make sure someone else isn't already doing it, and lets you verify that it is a good idea so you don't waste your time. Go uses GitHub extensively for all its issue tracking and source code. As you can see here, as well as the language itself you can contribute to various aspects of Go, such as networking libraries, the compiler, mobile libraries, cryptography libraries and much more; there is plenty to do! All code must be reviewed. They use a custom git command called git codereview, which provides easy commands for working with git and the Gerrit code review system that Go uses. Once you have made your change you mail it for review using the 'git codereview mail' command. You may receive comments from the reviewer; you can then modify your code accordingly and use the mail command again to resubmit. It continues like this until you receive a comment saying "Looks good to me", or LGTM. When your code has been approved you can sync (git codereview sync) and then submit your code to the master branch (git codereview submit).
More info: Contribute to Go


Wmii is distributed under the MIT license.
Wmii uses Google Code to host its source, track issues and accept patch submissions. They also use the Mercurial version control system for their repositories. Since it is a relatively small project, simply cloning from Mercurial, making edits and committing should allow one of the project members (there are only four) to see your code and either accept or deny the patch. You can keep track of issues and keep in touch with the project members here. I apologize, but I just could not find that much information about submitting patches and such. I suggest you give it a try; it's fun and pretty simple once you go through the user guide, and maybe if you start using it you can find some bugs that need fixing.


Plan 9 is distributed under a dual license GNU GPLv2 | Lucent license
I thought Plan 9 would be an interesting deviation from the usual projects that upstream through git or Mercurial or other repositories. Since it is an operating system, you will have to be running it. Image files can be found here and info about installing it can be found here. Plan 9 has a file server called sources as a host for its source repository; you can browse through it on the web, but it seemed to be a bit buggy. In Plan 9 you can simply mount that server on your directory using '9fs sources' and browse through it as if it were local, through the path /n/sources/. Similarly to Go, and I think this applies to all open source projects, discuss your idea with other people, or if you don't have an idea you can ask for suggestions (mailing list info). Once you have some code to post you use the command in Plan 9 simply called patch. They set guidelines which basically say that you should explain your patch/bug fix/update clearly, follow the style guidelines and update man pages when necessary. Once you submit with the patch command you can receive one of two messages: 'Sorry' or 'Applied'. If you receive 'Sorry' they will tell you why and what you can change to fix it, and if you receive 'Applied' then you've done well and the patch has been accepted.
More info: how to contribute

In conclusion, I think with some small projects you might have to e-mail the project members directly or chat on IRC to find a clear path to contributing. Concerning bigger projects, most of them really want contributors, so they have clear explanations posted to guide you through contributing; so I suppose I'm just a stenographer, for now.

Thanks for reading.

by James Boyer at January 28, 2015 06:52 AM

Maxwell LeFevre

Lines of Communication

This post is going to be a little out of order chronologically because it is about the original setup of the various lines of communication between myself, CDOT (Seneca’s Centre for Development of Open Technology), and the open-source community, including this blog.

CDOT Registration


Registration with CDOT was a multistage process that, in the end, provided us with edit privileges on the CDOT wiki, a profile, and the ability to add our blog to the CDOT Planet. Below I have outlined the required steps to get started with CDOT.

  1. Send an email to from your Seneca email account requesting a username and password, and wait for a response.
  2. Use the received username and password to log in at and change your password.
  3. Click on your name in the upper right corner of the page and use the ‘edit’ tab to modify your profile. When done editing you’ll have to click ‘save’, answer a math question that appears at the top of the page, then click save again.
  4. Go to and add your information to the table (use the edit tab at the top, not the edit button by the table). If there is anything you don’t know just leave it blank. You can come back to it later.

That’s all you have to do for setup on the CDOT wiki. The registration steps (1-3) are not necessary if you have registered with them in the past.


For my blog I chose WordPress and their free hosting option. Setting up WordPress was relatively hassle-free. Just go to, scroll to the bottom and click on 'Create Website'. The next two pages will prompt you to enter information for account setup; I am not going to talk about them because we've all done that before. Step 2 of 4 will try to sell you a custom address; note the 'no thanks' button on the bottom right. Some of the themes in step 3 of 4 also have costs associated with them.

When you are done with account creation it will take you to your 'My Sites' home page. Don't be tempted to work from this page; it uses what is referred to as the 'beep beep boop' editor, and this is not what you want to use. This editor only saves drafts in your browser cache, and they can be easy to lose if you don't publish them right away. It is also buggy and inconsistent. Instead, select the link called 'WP Admin' from the navigation bar on the left. This takes you to your 'Dashboard' where, by selecting 'Posts' on the left, you can use the old editor to create new blog entries without the risk of losing them. Just remember to push the save button on the top right while working.

CDOT Planet

Once you have created your blog, setting up CDOT Planet to read its feed is fairly straightforward. Go to and edit the page. Add your blog to the list using the right format for the blog site you selected. For example, the WordPress format is:
The CDOT Planet feed will pick up any new content you post automatically and add it to

IRC (Internet Relay Chat)

The final piece of communication to set up was IRC. I use OS X and decided to go with IRC client software instead of a browser-based solution. After looking at a few options I decided that was my best option because it is able to connect to multiple servers and merge them into a single chat window. It can also be configured to remember and connect to all the servers/channels you want when you launch it, and to authenticate your registered nick.

It is important to make sure you register your nick on the various servers that you plan on using so that no one can spoof being you. To register a nick do the following:

  1. Pick a nick and log in to the server with it.
  2. Type: /msg NickServ REGISTER password
  3. Use the information you got in the email to verify: /msg NickServ VERIFY REGISTER nick password

Each time you connect to a server you have to sign in with your nick and password.

At this point I went back and updated the table at with my blog information and my IRC nick. I also updated my profile on the wiki to reflect this information.

by maxwelllefevre at January 28, 2015 06:02 AM

Yan Song

Benchmarking, Apocryphally, or Is Benchmarking Good Timing?

For the benchmarking lab, our choice is PHP 5.6.5. The installation is, to some extent, straightforward. Nevertheless, it's still interesting to know how to achieve it without leaving the terminal window, even if one is working in a desktop environment. In fact, even a streamlined configure / make / make install process left make test reporting several test failures.

For a latecomer, it's bewildering to figure out how to do the timing. Fortunately, there are some Performance Considerations out there. We borrowed the second example from that document and wrote a simple bash script to extract the desired "user" CPU times. On our red server, a test run of our bash script for 100 invocations of that example PHP script, with the zend.enable_gc setting turned on, gave us the following frequency table:

Frequency   "user" CPU Time (s)
       40   3.49
       31   3.5
       13   3.48
        8   3.51
        3   3.47
        2   3.53
        1   3.63
        1   3.6
        1   3.52

It’s easy to see that for the example code, the mode running time is about 3.49 s. But the question is: How do we interpret this kind of timing, or how good is it?
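A harness along those lines can be sketched with bash's builtin time. The function below is illustrative, not the script we actually used; the command to benchmark is passed as arguments:

```shell
# Run a command N times and print a frequency table of its "user" CPU times.
usertimes() {
    local iter=$1; shift
    local TIMEFORMAT='%U'                 # builtin time: user CPU seconds only
    for ((i = 1; i <= iter; i++)); do
        { time "$@" > /dev/null; } 2>&1   # time reports on stderr; capture it
    done | sort | uniq -c | sort -rn      # count occurrences of each timing
}
```

Invoking it as, say, usertimes 100 php bench.php (script name illustrative) prints one line per distinct timing, most frequent first, matching the shape of the table above.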

by ysong55 at January 28, 2015 04:34 AM

Klever Loza Vega

Extension Update

During week 3 our team worked on finishing up our Brackets extension. It is now called HTMLHinter. We improved from our previous design by implementing the following features:

  • The error message now only shows if the button or the line number with the error is clicked
  • The error message now has styling
  • When an error exists, the mouse cursor changes to the 'pointer' type when over the gutter
  • When an error exists and the mouse is over the gutter, a message appears telling the user to click the button for more information

Screen Shot Final Brackets Extension

Overall, I learned a lot working on my first project at CDOT. For the time being, we are done working on this extension and are moving on to other Brackets projects.

by Klever Loza Vega at January 28, 2015 02:00 AM

January 27, 2015

Alfred Tsang

PHP Analysis and experience learned

It is really challenging to profile PHP. The source code in the makefile is very difficult to interpret, and commands in it, like exit, are hard to understand. When debugging it, errors such as segmentation faults often occur.

Results indicate that the makefile in PHP is hard to read and understand since it contains PHP code. This code needs careful editing, since a minor mistake will result in a fault.

I find this lab to be really fun as it really makes me learn new stuff.

by kaputsky263 at January 27, 2015 02:19 PM

Hong Zhan Huang

SPO600: The second lab… Benchmarking a LAM(P)

Having claimed my ssh keys since the last post, I now have access to two new environments: Red and Australia. With these in hand, it's time to proceed to Lab 2, which involves benchmarking some piece of the LAMP stack. In class during the lab, my group decided to benchmark PHP on the Red system.

In order to do that, however, we first needed to obtain, configure and install PHP on Red. Then we needed to pick some aspect of PHP to actually benchmark. The steps we took are as follows:

1. Obtain the source code

We did this through the terminal with the use of the wget tool:

wget -O php-5.6.4.tar.gz

Following this, we decompressed and opened up the package with: tar -xvf php-5.6.4.tar.gz

2. Configure and build. To have the code properly built we must first configure it. After that, all that's needed is to make it. Barring the need to make any changes to the configuration or Makefile, the method looks like this:

… <- configure doing its thing

It should be noted that the make command can run parallel jobs through the -j option to help speed up the process.

3. The next step is to ascertain whether everything was built properly; for that we can use the make test command. The results are as follows:

=====================================================================
TEST RESULT SUMMARY
---------------------------------------------------------------------
Exts skipped    :    52
Exts tested     :    27
---------------------------------------------------------------------
Number of tests : 13449          8908
Tests skipped   :  4541 ( 33.8%) --------
Tests warned    :     0 (  0.0%) (  0.0%)
Tests failed    :     4 (  0.0%) (  0.0%)
Expected fail   :    31 (  0.2%) (  0.3%)
Tests passed    :  8873 ( 66.0%) ( 99.6%)
---------------------------------------------------------------------
Time taken      :   256 seconds
=====================================================================

Well that doesn’t look quite right! But for now let’s move onward.

4. Now that PHP is properly(?) installed on the system, we can start benchmarking it, or at least some of it. During the lab class my group found and settled on this PHP script to benchmark some of PHP's features.

The script itself performs some 140000 math function calls, 130000 string manipulations, 19000000 for/while loop iterations, and tops it all off with 9000000 if/else statements. It does all this while keeping track of how much time was needed to perform these tasks.

5. Before we get to the actual testing, let's list some basic operating system, CPU and memory details for both testing environments, Red and Australia. I repeated the benchmarking process on Australia so I could see whether the differences between the platforms would have much bearing on the testing.


OS Info (cat /etc/*release*):
Fedora release 21 (Twenty One)
VERSION=”21 (Twenty One)”
PRETTY_NAME=”Fedora 21 (Twenty One)”
Fedora release 21 (Twenty One)
Fedora release 21 (Twenty One)

Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 4

Memory Usage:
              total      used      free    shared  buff/cache   available
Mem:       16716032    275840   6535424     10048     9904768    16235776
Swap:             0         0         0


Fedora release 21 (Twenty One)
VERSION=”21 (Twenty One)”
PRETTY_NAME=”Fedora 21 (Twenty One)”
Fedora release 21 (Twenty One)
Fedora release 21 (Twenty One)

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Model name: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
Stepping: 11
CPU MHz: 1596.000
CPU max MHz: 2394.0000
CPU min MHz: 1596.0000
BogoMIPS: 4799.79
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0-3

              total      used      free    shared  buff/cache   available
Mem:        8067408    337264    574396       748     7155748     7424404
Swap:             0         0         0

These are just a few of the details one can look up, and there are many other factors of a system that could play into why a benchmark's result is what it is. The following is a brief rundown of some useful Linux/Unix utilities for finding information about your system's environment.

  • lshw – list hardware; a tool used to extract detailed information on the hardware configuration of the machine
  • lscpu – gathers CPU architecture information from sysfs and /proc/cpuinfo
  • free – displays the amount of free and used memory on the system
  • cat /etc/*release* – shows which release of an operating system the machine is currently running
  • rpm -qa – queries the RPM package manager for all packages installed on the system

6. Now for the actual results of the testing. I'll just put down the first three results for each machine to keep this from being too lengthy:


Test 1

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 22:18:53
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.311 sec.
test_stringmanipulation   : 2.545 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.329 sec.
Total time:               : 8.038 sec.

Test 2

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 22:19:01
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.312 sec.
test_stringmanipulation   : 2.515 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.334 sec.
Total time:               : 8.014 sec.

Test 3

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 22:19:09
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.327 sec.
test_stringmanipulation   : 2.527 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.332 sec.
Total time:               : 8.039 sec.

The largest time difference between tests here, 8.039 s versus 8.014 s, is about 0.3%, which I'd say is a fairly stable result.


Test 1

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-26 04:36:35
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 1.551 sec.
test_stringmanipulation   : 1.664 sec.
test_loops                : 1.176 sec.
test_ifelse               : 1.105 sec.
Total time:               : 5.496 sec.

Test 2

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-26 05:00:06
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 1.526 sec.
test_stringmanipulation   : 1.659 sec.
test_loops                : 1.265 sec.
test_ifelse               : 1.297 sec.
Total time:               : 5.747 sec.

Test 3

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-26 05:00:21
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 1.535 sec.
test_stringmanipulation   : 1.648 sec.
test_loops                : 1.181 sec.
test_ifelse               : 1.292 sec.
Total time:               : 5.656 sec.

The largest time difference between tests here, 5.747 s versus 5.496 s, is about 4.4%, which is a much less stable result compared to Red.
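The spread figures quoted for both machines can be reproduced with a small helper; this function is just a sketch of the arithmetic, (max - min) / max, applied to the three total times:

```shell
# Relative spread of a list of benchmark totals, as a percentage.
spread() {
    printf '%s\n' "$@" | awk '
        NR == 1  { min = max = $1 }
        $1 < min { min = $1 }
        $1 > max { max = $1 }
        END      { printf "%.1f\n", (max - min) / max * 100 }'
}

spread 8.038 8.014 8.039   # Red totals -> 0.3
spread 5.496 5.747 5.656   # Australia totals -> 4.4
```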

7. Looking over the results, it's clear that Australia has a leg up on Red in regards to the amount of time it took to run the benchmark script. To contrast that, however, Australia's results had a higher degree of variance, making them less reliable as a baseline reference of performance compared to Red's.

Things that come to mind after concluding this lab:

  • The sample size isn't large enough to be considered adequate for testing. Running this benchmark many more times would allow for more accurate readings.
  • We've only tested a subset of PHP, and these results are only relevant when talking about that subset of functionality.
  • Only time was measured here; other aspects such as CPU usage were not included. On the same note, if multithreading were applied, how would the results change?
  • Seeing make test not fully pass could be a sign that the PHP installation itself has faults.
  • The environments used were not the most sterile.

Overall I think what I’ve learned is that benchmarking is rigorous!

If you’ve read this far and are curious how the rest of my group fared with this same benchmark go check out their blogs (to be updated as they make posts):
Hosung Hwang

Quest completed! Obtained experience and more questions that need answers. End log of an SPO600 player until next time~

by hzhuang3 at January 27, 2015 06:42 AM

Hosung Hwang

[SPO600] Code Review Lab – How to contribute to an open source project – php

Lab : SPO600 Code Review Lab
I chose php.

License : PHP license (version 3.01)
Patch process : through mailing list or getting Git account (
Repository : github (
Bug tracking system : PHP Bug Tracking System

Contributing patches
There are two ways to contribute patches.
If the patch is a minor and small fix, the contributor should subscribe to the internals mailing list. The patch can then be sent to the mailing list.
I subscribed to the mailing list; the process is:
1) send blank email to
2) get automatic answer that contains confirmation code
3) send the confirmation code to
4) Now subscribed

If the patch is major, or the contributor plans to contribute regularly, a Git account can be requested through a request form:
Also, to fix bugs in the PHP Bug Tracking System, a Git account is necessary.

Development process (RFC)
1) Through the mailing list, measure the reaction to the intended proposal.
2) Create an RFC in – the RFC is owned by its creator, and the owner has the responsibility to manage the issue
: “In Draft” status
for example, PHP RFC: Remove the date.timezone warning
3) Discuss the RFC through the mailing list, get feedback
: "Under Discussion" status
4) When discussion ends, send an email with the subject [VOTE] and hold a vote.
: "Voting" status
5) Based on the result -> "Accepted" or "Declined", or back to "Under Discussion"
6) After implementation, update the RFC with the GitHub link

After subscribing to the mailing list, I received many emails. Discussing through a mailing list looks very effective.
The process of creating an RFC, getting feedback, voting, and then starting development seems a very organized and effective way to gather input from people who are working in other places and time zones.

by Hosung at January 27, 2015 04:54 AM

Mobile – App installation check 2 – AppURL

In the previous posting, I built a simple web page that determines whether an app is installed or not. It works well on iOS and Android; however, for other platforms, including Blackberry and Tizen, it still needs to be figured out.

While I was looking for a way to check app installation on Blackberry, I found a very interesting service called AppURL.

This website provides pages like the appcheck.html I made. It takes a configuration as a JSON file, and the page redirects to the app or the app store. It supports almost all mobile OSes, including Windows Phone and Tizen.

For example, taking Evernote:
1) when the user opens this page :
2) it gets the configuration from :
3) it determines the OS and tries to open the app, or the app store if it is not installed, using this JavaScript :

appurl.js is the core. Basically the implementation for Android and iOS was similar to appcheck.html.
I asked about the license by email, and the developer answered that their intention was to make appurl.js available for anyone to use, free of charge. Thanks for their good work and kind intentions.
So we can use the service, or we can build our own website using appurl.js.
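The core of such a page can be sketched roughly as below. This is a hypothetical illustration of the detect-and-redirect pattern, not appurl.js's actual API; the config shape and all URLs are made up.

```javascript
// Pick a target URL for the current platform from a JSON-like config.
// Everything here (URLs, config keys) is illustrative.
function pickTarget(userAgent, config) {
  var ua = userAgent.toLowerCase();
  if (/iphone|ipad|ipod/.test(ua)) return config.ios;
  if (ua.indexOf('android') !== -1) return config.android;
  return config.fallback; // unknown platform: send to a landing page
}

var config = {
  ios: 'myapp://open',                        // custom scheme (made up)
  android: 'myapp://open',
  fallback: 'https://example.com/get-the-app' // made-up landing page
};

var target = pickTarget('Mozilla/5.0 (iPhone; CPU iPhone OS 8_1 like Mac OS X)', config);
// In a real page: location.href = target;
```

In a real page the user agent would come from `navigator.userAgent` and the chosen target would be assigned to `location.href`.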

According to my test, Evernote didn't work on Android while it worked on iOS. I guess Evernote for Android changed its URL scheme.

Setting URL scheme
Android :
Blackberry 10 :
Firefox OS :
iOS :
Tizen :
Windows Phone 8 :

Another Issue
If every service makes its own URI scheme for its app or service, there could be clashes. Since a registration is overwritten in the system, making schemes unique will be important in the future.
According to the Wikipedia page, there are official IANA-registered URI schemes and unofficial but common URI schemes. Registering a scheme with IANA could be considered.

Next Step
Make an HTML page using appurl.js.
Test on all possible OSes, maybe using a Cordova app.

by Hosung at January 27, 2015 12:30 AM

January 26, 2015

Andrew Benner

Bramble — Week 3 January 19-23, 2015

I began the week working on a bug where the error button in the gutter was actually creating two instances; they were overlapping, which altered the shape of the button. I resolved the bug, so now it's just the one button. Afterwards, I worked on the drop-down panel for the error message, adding styling so it looks better.

The challenge with the panel was figuring out how to add a class to the inline error pane. It actually turned out to be a one-line fix, which was nice. While testing HTML syntax I noticed that the first occurrence of an <a> tag with an invalid attribute didn't highlight until a second error was created after it. The error button showed, but not the highlighting. After a little investigation, it turned out there was an error in the if-statement condition in the highlighting function.

On Tuesday we were given our next project for the following two weeks. We are trying to embed Brackets in Thimble, to replace what currently exists in Thimble. We are naming this new project Bramble. The portion of the project that I'm working on right now is turning off some of the user interface: the sidebar, toolbars, and menus that we don't want to have. The first thing I had to do was familiarize myself with the Brackets code. While searching for the user interface options we wanted to hide, I found out some information about some of the other files.


ProjectManager glues together the project model and file tree view and integrates with other parts of Brackets. It is responsible for creating and updating the project tree when projects are opened and when changes occur to the file tree.


Creates the view of open files on the left of the text editor.


Pane objects host views of files, editors, etc. within the actual text editing area.


Manages the arrangement of all open panes, with a limit of two panes.


Contains all the commands for the menu bar. This was my first lead. Within this file I found the VIEW_HIDE_SIDEBAR command. I needed to find where and how this command is called, which led me to the SidebarView.js file.


This file controls the showing and hiding of the sidebar, with functions that can toggle, show, and hide it. I began to make an extension which calls the hide function when Brackets has started. It immediately hides the sidebar; it's not 100% what we want to do, but it's a start.

DefaultMenus.js and Menus.js

Creates the entire set of default menus at the top of the screen, such as File, Edit, View, etc. Menus.js contains a remove-menu function that I was able to call from the extension to remove some of the options. I still need to look into it further because it's not quite what we need, but again, it's a start.

I will be putting in more time and effort to reach our goal of removing these user interface features. I will be updating my blog as I make progress so stay tuned.

by ajdbenner at January 26, 2015 03:06 PM

Bradly Hoover

How fast can it go?

If you were to compare software from 10 years ago to the software of today, you would say that the speed of software has vastly improved. For example, Windows 8.1 starts in less than 10 seconds, whereas Windows 98 took over a minute. What would you do now if you had to wait a whole minute before the computer was ready to use? Complain, most likely.

If you were to ask people why there was such an increase in speed, a lot of people would say the hardware technology has improved, and they would not be wrong. If you took Windows 98, however, and ran it on a modern-day machine (if you could find drivers for the modern hardware), it would be faster, but not by as much as you'd think.

The speed of Windows 8.1's start time comes down to the software. Programmers have found ways of optimizing the code they write to take proper advantage of the hardware. They are able to look at the programs that they develop and optimize them. Without well-written programs, all of that blazing-fast hardware would go to waste.

Benchmarking is the first step in determining performance: time it, and see how fast it actually is. That is the only way you will actually get an idea of its performance. After the initial performance of a program is measured, changes can be made to the program, and then you benchmark it again. One key piece of software that plays an important role in performance is the language and its compiler. I decided to take the Python compiler, build it on a system, and get a baseline of its performance. Using the baseline benchmark, if I tried to optimize the Python compiler, any difference in time would indicate a change in performance.

Before benchmarking, it's very important to have information about the system. Even though we are testing the software, the hardware plays a significant role: some software is capable of taking advantage of multiple cores or utilizing more memory.


The critical system specs for the benchmarking are:

Total Ram       : 16716032 kB
Processor        : AArch64 Processor rev 0 (aarch64), 8 Cores

Just as important are the specs of the operating system I used: Fedora 21, with the gcc 4.9.2 compiler.

Linux version 3.17.8-300.fc21.aarch64 (
gcc version 4.9.2 20141101 (Red Hat 4.9.2-1) (GCC)

Last, but not least, I downloaded the latest development branch (at the time of testing) of Python.

Benchmarking Python was fairly straightforward. I had first attempted to do the same with MySQL server, but after more than 6 hours of struggling to get the install working correctly, I decided to switch to Python. If it had not been for the time constraints on this post, I would have taken the time to understand what was wrong and completed those tests.

I followed the steps to build and run the test. Everything went off without a hitch. When I reached the final step and ran the test, I mistakenly forgot the -j3. This is a very important flag that I will explain in a minute. The results of the first test were:

User time (seconds): 1089.18
System time (seconds): 57.63
Percent of CPU this job got: 76%
Elapsed (wall clock) time (m:ss): 25:00.65

As with all benchmarking, you should never just go with one result. The second time, I got

User time (seconds): 1092.83
System time (seconds): 57.90
Percent of CPU this job got: 76%
Elapsed (wall clock) time (m:ss): 25:04.88

The difference in time between the two runs was around 0.3%. I would say that the results are stable.
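As a quick arithmetic check of that stability claim, the percent difference can be computed from the two wall-clock times reported above:

```javascript
// Percent difference between the two wall-clock runs:
// 25:00.65 (run 1) vs 25:04.88 (run 2).
var run1 = 25 * 60 + 0.65;   // 1500.65 seconds
var run2 = 25 * 60 + 4.88;   // 1504.88 seconds
var pctDiff = (run2 - run1) / run1 * 100;
// pctDiff comes out to roughly 0.28%, i.e. "around 0.3%"
```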

Now, back to the -j3 flag. When going back and writing this, I discovered I had omitted the flag. I looked up what it does: it specifies how many cores or threads to use when running the test. So I decided to run the test again, this time with all 8 cores, taking advantage of all the system could give me. The results were as follows:

User time (seconds): 977.81
System time (seconds): 64.61
Percent of CPU this job got: 509%
Elapsed (wall clock) time (m:ss): 3:24.55

User time (seconds): 979.85
System time (seconds): 63.25
Percent of CPU this job got: 508%
Elapsed (wall clock) time (m:ss): 3:24.93

Again, the two results do not vary in any significant way. From these tests, we can see the difference made by using all the cores: roughly a seven-fold wall-clock speed increase from using all 8 cores, close to what one would intuitively guess.
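The size of that speedup follows directly from the wall-clock times above:

```javascript
// Wall-clock speedup from running the test with the -j flag on all cores:
// 25:00.65 without the flag vs 3:24.55 with it.
var singleThread = 25 * 60 + 0.65;  // 1500.65 seconds
var allCores = 3 * 60 + 24.55;      //  204.55 seconds
var speedup = singleThread / allCores;
// speedup comes out to roughly 7.3x
```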

So, what does one do with this information?

We use this as the baseline for all future tests. If I were to make changes to the Python source, we could see if there were any overall performance changes by comparing the new build speed to these results. I have a feeling that in future blogs, I will be writing about just that.

The post How fast can it go? appeared first on

by Brad at January 26, 2015 05:14 AM

January 25, 2015

Gary Chau

Open Source Licenses – SPO600

This post is for my SPO600 course, following our brief introduction to open source during the first 2 classes.  We have been told to look into the development of 2 open source projects with different license agreements.

First Program up for analysis is VLC:
Developers: VideoLAN Organization
License: GNU General Public License (GPL)
VLC is a media player program created by the VideoLAN Organization.  It distributes its program under the GNU GPL.  The organization monitors its forum site for new bugs, issues that users are encountering, or features that are being requested.  The forums provide a place for admins/developers to understand what the current issues are and what has been solved by others.  Their current means of code distribution is the Git version control system.  Developers can look over the source code and develop new or improved code for VLC.  Any code they wish to have pushed into VLC requires them to send an email to the admins with the code, an explanation of the code, and their contact information.  This basically wraps up how VLC improves its code with the help of the community.

Second Program for analysis is Angular.JS:
Developers: Google
Licence: The MIT Licence
Angular.JS is a JS framework that's designed by Google to help make JS smarter and easier to code.  The code for Angular.JS is freely available on GitHub.  Contributions to the source code are made through GitHub's features.  There are directions from the developers on how code should be submitted and in what format.  This is because their update logs are auto-generated, and the format of the code's explanation determines how it will display in the update logs.

This exercise has helped me realize that most of the open source programs I have been using over the years are under the GNU GPL licence, which seemingly is more popular than the MIT licence.  This was quite interesting to know, and I will be doing some research to try to understand the differences between the two licence agreements that would lead developers to choose one over the other.

by gchau2 at January 25, 2015 04:49 PM

January 24, 2015

Yan Song

The Beginning Is the End Is the Beginning

So, is it reasonable for us to investigate, at this early stage, the code-review and patch-acceptance mechanisms within various open-source communities? Yes, if one’d like to—and, of course, is capable of.

Every open-source project comes with a kind of licensing. Different projects may have different licenses. The two projects—GNU Binutils and Apache Maven—we’re looking at here have different licensing. GNU General Public License is applied to the former, Apache License the latter.

Yet from the patch-check-in standpoint, those two are the same. Potential contributors create and submit their patches, following the projects' official guidelines, through each project's issue-tracking system (Sourceware Bugzilla and Codehaus, respectively) and wait for the core developers to resolve the issues.

Sounds familiar?

by ysong55 at January 24, 2015 07:00 PM

Hosung Hwang

Mobile – App installation check 1 – iOS and Android

Determining whether a user's phone runs iOS, Android, or Blackberry is important when distributing a mobile app, as are checking whether an app is installed or not and redirecting to the app store. In a web page, this is possible using various techniques.

1. Determine phone OS
The user agent string from mobile browsers plays an important role here.

var IS_IPAD = navigator.userAgent.match(/iPad/i) != null,
    IS_IPHONE = !IS_IPAD && ((navigator.userAgent.match(/iPhone/i) != null) || (navigator.userAgent.match(/iPod/i) != null)),
    IS_IOS = IS_IPAD || IS_IPHONE, // combined flag, used by the checks below
    IS_ANDROID = !IS_IOS && navigator.userAgent.match(/android/i) != null;

source :

This part is simple. It determines from navigator.userAgent whether the device is an iPhone, iPad, iPod, or Android.

2. App installation check

function checkAppInstall() {
	//scheme of the app
	var url = "myprotocol://foo";
	if (IS_ANDROID) {
		var invisible_div = document.getElementById("invisible_div");
		invisible_div.innerHTML = ""; //see the github
	} else if (IS_IOS) {
		setTimeout(function() {
			goMarket(); // the app did not open, so fall back to the store
		}, 1000);
		location.href = url;
	} else {
		alert("android and iOS only");
	}
}

innerHTML part is erased because wordpress messes up.
url is a custom scheme registered by the mobile application; it is the same concept as a URL, so this kind of additional implementation is possible.
Inside the app, the name/value pairs can then be processed like command line arguments.
Basically this URL can be opened from a web page to open the app; however, determining whether the app exists varies by OS, so the implementation differs. The page just tries to open it; as far as I know there is no elegant way to do this.
In this code, if the app exists, it is opened; otherwise, goMarket() is called.
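The iOS branch's timeout trick can be sketched in a testable form like this. The `navigate` and `schedule` parameters stand in for `location.href` assignment and `setTimeout`, and the URLs are illustrative:

```javascript
// Sketch of the iOS "try the app, fall back to the store" pattern.
// In a real page, navigate would be function (u) { location.href = u; }
// and schedule would be setTimeout. If the custom scheme launches the app,
// the browser is backgrounded and the scheduled fallback has no visible effect.
function tryOpenApp(appUrl, storeUrl, navigate, schedule) {
  schedule(function () { navigate(storeUrl); }, 1000);
  navigate(appUrl); // attempt the custom scheme first
}

// Simulated run: record navigations and fire the timer by hand.
var visited = [];
var timers = [];
tryOpenApp('myprotocol://foo', 'https://store.example/myapp',
           function (u) { visited.push(u); },
           function (fn, ms) { timers.push(fn); });
timers[0](); // pretend 1000 ms passed with the page still in the foreground
// visited is now ['myprotocol://foo', 'https://store.example/myapp']
```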

3. Opening store app

var market_a = "market://details?id=com.facebook.katana";
var market_i = "";
function goMarket() {
	if (IS_ANDROID) {
		location.href = market_a;
	} else if (IS_IOS) {
		location.href = market_i;
	} else {
		// do nothing
	}
}

“market://” is the custom schema of Google Play Store app in Android. We can open it with details including an app’s id. In the example, it is facebook app’s id.
Several years ago, Apple app store used the same way; it was “itms-apps://”. However, it is changed. Now, if we open, app store opens.
In the example, facebook app’s id for iOS and Android was used.
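Putting the two store targets together, a goMarket-style helper might look like the sketch below. The Android id is the Facebook example from the post; the iOS URL pattern and id are hypothetical placeholders, not verified links:

```javascript
// Pick the store URL for the current platform. market:// opens the Play
// Store app on Android; on iOS an https iTunes link opens the App Store.
// The iOS URL here is a hypothetical placeholder.
function storeUrlFor(isAndroid, isIos) {
  if (isAndroid) return 'market://details?id=com.facebook.katana';
  if (isIos) return 'https://itunes.apple.com/app/id000000000'; // placeholder id
  return null; // other platforms: do nothing
}
// In a real page: location.href = storeUrlFor(IS_ANDROID, IS_IOS);
```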

4. Custom scheme in the native app
In an Android app, a custom scheme can be set by adding it to the AndroidManifest.xml file.

<activity android:name="MyApp" android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
    <intent-filter>
        <data android:scheme="myprotocol" />
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.BROWSABLE" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>

The second intent-filter sets the custom scheme, myprotocol. This app can then be opened by calling "myprotocol://".
According to my tests, Cordova and Crosswalk applications also worked well with this setting.

In iOS, a custom scheme can be set via a property in Xcode.

5. Implementation of the web page
One way to lead the user to install an app is via a web page.
When I tested in email, links with http:// or https:// worked inside the email page; however, a custom protocol (scheme) doesn't even show as a link.

<a href="myprotocol://aaa">Check App Install</a>
<button onclick="location.href='myprotocol://aaa'">Check App Install</button><br>

Both case didn’t work in the email client (Android Gmail and Mobile web browser)
Therefore, linking to a webpage that contains all these script seems to be a solution.

Next step
This works for Android and iOS. Blackberry also has the user agent "blackberry"; however, the way to detect installation needs more research.
The same probably goes for Windows Phone and maybe Tizen.

Source code is available here:

by Hosung at January 24, 2015 12:50 AM

January 23, 2015

Jordan Theriault

Embedded License Data

Right-clicking and pressing "save as" is the best-known way of downloading a photo on the internet. There are few venues for photo hosting which will protect a photo online. Flickr, for example, provides right-click protection, watermarks, and original-size protection. Even these features do not fully protect the piece of work, and therefore we must, as developers and creatives alike, operate under the assumption that any photo online can be taken without permission.

The next line of defence is licensing information. Even if the proper information is given on the same page that the photo can be downloaded from, once it is in another location the licensing data does not come with the photo. It’s therefore difficult for someone to know who the photo is attributed to and what they are able to do with it legally.

Broxigard via DeviantArt

Embedding licenses in image files is Moby-Dick, and developers are Captain Ahab: long desired by many legal and creative entities around the web, but never quite fully realized. The ability to use a library or libraries to add this easily to an application would be a huge boon to the popularity of proper licensing, which, let's face it, benefits everyone.


Working with my professor David Humphrey and 4 other peers at Seneca College, we are taking on the task of creating a library or libraries that will enable a developer to attach a Creative Commons license to image files. This is something Creative Commons is eager to have as a tool in order to further innovate licensing.

Most popular image formats have an area that allows text to be inserted into them. As David Humphrey mentions, PNG, JPEG and GIF all have fields into which the licensing information, a condensed license, or even a permalink to the license can be added.
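For PNG, for instance, license text can travel in a tEXt chunk. Below is a minimal sketch of building one; the keyword and license string are illustrative, and a real library would also handle iTXt for UTF-8 text:

```javascript
// Build a PNG tEXt chunk: 4-byte big-endian length, the 4-byte type "tEXt",
// the data (keyword + NUL + text), then a CRC-32 computed over type + data.
function crc32(bytes) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < bytes.length; i++) {
    crc ^= bytes[i];
    for (var j = 0; j < 8; j++) {
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}

function textChunk(keyword, text) {
  function toBytes(s) { return s.split('').map(function (c) { return c.charCodeAt(0); }); }
  function be32(n) { return [n >>> 24 & 255, n >>> 16 & 255, n >>> 8 & 255, n & 255]; }
  var data = toBytes(keyword).concat([0], toBytes(text));      // keyword NUL text
  var body = toBytes('tEXt').concat(data);                     // type + data
  return be32(data.length).concat(body, be32(crc32(body)));    // length, body, CRC
}

var chunk = textChunk('Copyright', 'CC BY-SA 4.0');
```

The resulting byte array could be spliced into a PNG before the IEND chunk; JPEG and GIF need format-specific equivalents.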

To begin, we have divided into groups. We are first attempting to programmatically insert licensing into an image file on Android by whatever means necessary.

I am investigating the Extensible Metadata Platform (XMP) format, as per Adobe's specifications, in order to develop the most appropriate method for generating and attaching a Creative Commons license. This will need to take into account the space available in the most constrained of the major image formats, while allowing the creator of the image to customize the license. An example of the tool Creative Commons has can be seen here.

Please also check out David Humphrey’s blog post on the topic.

by JordanTheriault at January 23, 2015 10:54 PM

Chris Tyler (ctyler)

How to Become a Good Artist

To my students: this also applies to programmers and sysadmins.

by Chris Tyler ( at January 23, 2015 07:44 PM

Anderson Malagutti

Android – Using IntentService

Hello everyone.

This is actually my first real blog post. I've been working with Android programming these last few days, so I decided to share something here that might be useful to someone else. =)

In this post I would like to show how you can use an IntentService on Android. It might be very useful for you, especially if you're trying to do a little work in the "background" of your app.

Here it’s what Google says about it:

IntentService is a base class for Services that handle asynchronous requests (expressed as Intents) on demand. Clients send requests through startService(Intent) calls; the service is started as needed, handles each Intent in turn using a worker thread, and stops itself when it runs out of work.

This “work queue processor” pattern is commonly used to offload tasks from an application’s main thread. The IntentService class exists to simplify this pattern and take care of the mechanics. To use it, extend IntentService and implement onHandleIntent(Intent). IntentService will receive the Intents, launch a worker thread, and stop the service as appropriate.

All requests are handled on a single worker thread — they may take as long as necessary (and will not block the application’s main loop), but only one request will be processed at a time.

One good way to learn how the intent service works is making use of it. Therefore, we will create a pretty simple app to use it.

Firstly, let’s create our own Intent service to work with an android application.

For instance, our class will be called, “MyIntentService”, a pretty nice name. haha

Code for our MyIntentService

import android.app.IntentService;
import android.content.Intent;
import android.util.Log;

public class MyIntentService extends IntentService {

    public String tag = "MyIntentService";

    public MyIntentService() {
        super("MyIntentService"); // names the worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
       Log.d(tag, "Service started.");
       //getting the text from the intent.
       String messageMainActivity = intent.getStringExtra("message");

       //You should be able to see this message on your LogCat terminal.
       Log.i(tag, "The message from the activity is: " + messageMainActivity);

       Log.d(tag, "Service finished.");
    }
}

Basically, every time you start the service it's going to call the onHandleIntent method, so your code will be executed.

Code for our Main Activity:

import android.content.Intent;
import android.os.Bundle;
import android.support.v7.app.ActionBarActivity;
import android.view.View;
import android.widget.Button;

public class MainActivity extends ActionBarActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
       super.onCreate(savedInstanceState);
       setContentView(R.layout.activity_main); // layout name is illustrative

       // the button id below is illustrative
       Button button_start_service = (Button) findViewById(R.id.button_start_service);
       button_start_service.setOnClickListener(new View.OnClickListener() {
          public void onClick(View v) {
             Intent intent = new Intent(MainActivity.this, MyIntentService.class);

             //Note: You may insert other types of data. For instance: Double, Integers...
             intent.putExtra("message", "Hello. You're using the IntentService to show this message.");
             startService(intent); //Starts the service using your intent.
          }
       });
    }
}



So, it is very easy to start a service. You only have to create your intent specifying your service, then you can put some information into the intent, and use it to start your service.

It’s very important that you declare the Service on your Manifest file.

The service tag should be before the </application> tag.

<service android:name=".MyIntentService" />

The XML for our Main Activity is:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent" android:layout_height="match_parent">
  <!-- the button id is illustrative -->
  <Button android:id="@+id/button_start_service"
    android:layout_width="wrap_content" android:layout_height="wrap_content"
    android:text="Start Service."
    android:layout_centerHorizontal="true" />
</RelativeLayout>


Screen Shot 2015-01-22 at 11.10.47 PM

Finally, you should be able to see it on your logcat terminal when the button is pressed.

01-22 22:36:35.765 2299-2315/com.intentservice.anderson.intentserviceexample D/MyIntentService﹕ Service started.
01-22 22:36:35.765 2299-2315/com.intentservice.anderson.intentserviceexample I/MyIntentService﹕ The message from the activity is: Hello. You're using the IntentService to display this message
01-22 22:36:35.775 2299-2315/com.intentservice.anderson.intentserviceexample D/MyIntentService﹕ Service finished.

An IntentService is very easy to use. It might be very useful when you have to do something in your app such as network communication.

Thank you.
I hope it helps somehow.

See you!

by andersoncdot at January 23, 2015 04:27 AM

newYear.start(); //2015

Hey everyone.

My name is Anderson.

I am an exchange student at Seneca College in Toronto.

2015 is just starting, and I am starting to work at CDOT!

I studied Android programming last semester at Seneca, and now I am having the opportunity to apply what I've learned by working on a real project at CDOT.

My project is called BRAKERS!

We are starting the project by creating an Android application that will be a very important part of the BRAKERS system.

Hopefully I am going to have a lot of things to talk about here during the next weeks!

See you!

by andersoncdot at January 23, 2015 04:26 AM

Artem Luzyanin

Lab 2 or pains of benchmarking

For Lab 2 of SPO600 we had to do benchmarking in a group. This first hands-on experience was long and wearing. We chose MySQL server for our benchmarking, and the first question came up right away: how do we download the package on a remote Fedora system? With the help of our professor we ran "wget", learning that command once and for all. After "un-tarring" the file, we ran into the problem of how to build the package. Again with the help of the professor, we figured out to run "sudo yum install cmake", "cmake" being the build tool the downloaded source requires. The next problem arose when we couldn't run the "cmake" command, but after reading its help we figured out we had to specify the location, so it became "cmake ." . A useful piece of information came after we had already started building the app: "make -jN" (with N between cores+1 and cores*2+1) will speed up the build. Good to know! After we successfully built the program (which involved 20 minutes of looking at running numbers), we decided to run the benchmarking suite that came in the MySQL package. We spent some time trying to figure out how to start the MySQL server on Fedora, but googling it helped.
I performed the "insert" statement test five times to make sure the results are stable. The results showed the "usr" portion to be around 0.37s, the "sys" portion around 0.2s, and the total time around 0.57s. The SQL benchmarking program is single-core, therefore the test results show the MAXIMUM time it should take the server to perform these tests. The tests were performed on a Fedora 21 system. This is the memory information on the server:
total used free shared buff/cache available
Mem: 8067408 359488 1879212 784 5828708 7409324
Swap: 0 0 0

This is the processor information (for each core of a quad-core processor):

vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
stepping : 11
microcode : 0xba
cpu MHz : 1596.000
cache size : 4096 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm tpr_shadow vnmi flexpriority
bogomips : 4799.79
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual

The tests show that the difference between the user and kernel CPU time for the "insert" statement is not substantially large (less than a factor of two), which indicates heavy usage of the hardware.
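That "less than twice" figure follows directly from the averages reported above:

```javascript
// Ratio of user to kernel CPU time for the "insert" test.
var usrTime = 0.37;  // seconds, user CPU
var sysTime = 0.20;  // seconds, kernel CPU
var ratio = usrTime / sysTime;
// ratio is 1.85, comfortably under 2
```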
Generally speaking, this lab was a steep learning curve for me, due to my lack of knowledge of Linux and benchmarking. I am still not sure what I am supposed to learn from these tests that would help me improve the performance of the application. I hope that by the end of the course this question will get an answer…

by lisyonok85 at January 23, 2015 01:54 AM

Comparison of different OSS licences

For my first post related to SPO600, I would like to talk about a couple of Open-Source Software (OSS) licences, and a couple of applications that use them. As a math-lover, I will talk about two math-related applications: MAXIMA and Scilab.

MAXIMA is a math application written in Common Lisp that features an interface for algebra calculations and visualization. MAXIMA is released under the GPL, which stands for GNU General Public Licence. The GPL "guarantees end users (individuals, organizations, companies) the freedoms to use, study, share (copy), and modify the software." (Wikipedia) On the example of this patch, I was able to track the life cycle of MAXIMA's updates. The process is rather simple. One posts a Patch Ticket on the app's website, listing one's findings and attaching code to be applied to the app. The findings and the code are then reviewed by a support team member and, depending on whether or not the code is deemed an upgrade to the existing code, it might or might not become an official patch. Usually only two people are involved in the whole patch process: the person who submitted the ticket and the person who approved the code and made it into a patch. The time between code submission and patch release varies greatly, ranging from a few hours to 6 months. Sometimes the suggested code is modified or rejected if it creates another problem in the code, simply needs to be improved, or if the author didn't have the most updated version of the program, which led him to think there was an unresolved problem.

The GPL approach seems simple and straightforward, which on its own is a positive. The code submitted for review is available to other members of the community, allowing reviews by everyone, not only the app developers. On the other hand, since the project is usually developed and reviewed by only a couple of people, it is prone to errors. Since some patches are implemented within a very small review window, they might not have enough eyes on them to spot a problem. At the same time, since reviews are not assigned to anyone in particular, sometimes it will be many months before some developer decides to take a closer look at the supplied patch code.

Personally, I would need to learn more about the technique of creating .patch files to be submitted with a Patch Ticket. Other than that, the process of submitting a patch is so simple that one wouldn't need to know much more in order to share their knowledge and efforts with the community.

Scilab is a more complex piece of software, "used for signal processing, statistical analysis, image enhancement, fluid dynamics simulations, numerical optimization, and modeling, simulation of explicit and implicit dynamical systems and (if the corresponding toolbox is installed) symbolic manipulations." (Wikipedia) It was developed in a combination of Scilab, C, C++, Java, and Fortran under CeCILL. CeCILL is a GPL-compatible open-source software licence that was created for and is supported by French institutions. Up until 2005, English was considered only a draft language for applications developed under CeCILL. Another interesting point of CeCILL is that it is governed by French courts and French law, although the parties can mutually agree to work under the jurisdiction of another country.

This page describes many ways a person can help the Scilab community. As you can see, translating Scilab is one of the common ways to do so, due to its licence specifics. To help with patching the source code, one needs to become an official Scilab contributor. Then it is possible to submit a patch topic to the developers' website. Due to CeCILL's heavier legal background compared to the GPL, contributors have to comply with its legal rules. An official Scilab contributor can submit and implement a patch whenever he likes, making him the only person involved in the patching process. If a person is not an official Scilab contributor, he can submit the proposed patch to the support team for review.

The biggest benefit of the CeCILL way of patching is that if a person is an official contributor, no extra time is needed to implement the patch, and only one person is involved. On the other hand, because only one person is involved, this approach is very prone to errors. In order to contribute to this project, I would definitely have to become an official Scilab contributor, to avoid the unnecessary and lengthy process of having a patch reviewed. I would also have to learn the process of applying a patch on my own.

After researching two different open-source licences, I am definitely leaning towards CeCILL due to the reduced time required to apply a working patch. In my opinion it makes sense to be able to improve a program when a patch is available, not when a support team member will have time for it.

by lisyonok85 at January 23, 2015 12:52 AM

January 22, 2015

Hosung Hwang

[SPO600] Baseline Builds and Benchmarking Lab

Our group chose PHP from the LAMP stack for the SPO600 Baseline Builds and Benchmarking Lab.

After figuring out how to do it in class, I am repeating it by myself on both servers: red (ARM64) and australia (x86).

CPU and memory information of red server:

Hardware	: APM X-Gene Mustang board
[hshwang2@red ~]$ lscpu
Architecture:          aarch64
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             4
[hshwang2@red ~]$ free
              total        used        free      shared  buff/cache   available
Mem:       16716032      274048     6661504        9216     9780480    16239040
Swap:             0           0           0

This web page is useful for looking at hardware information:
16 commands to check hardware information on Linux

1. Downloading php source code
Source webpage :
We used the "Current Stable PHP 5.6.4" version.

[hshwang2@red phpbench]$ wget -O php-5.6.4.tar.gz

2. Unpacking

$tar -zxvf php-5.6.4.tar.gz

3. configure


4. build


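For reference, a typical configure-and-build sequence for PHP from source looks roughly like this (a sketch with default options; the exact configure flags we used are not shown here):

```shell
# Configure and build PHP 5.6.4 from the unpacked source tree.
# --prefix is a placeholder install location; -j8 matches the 8 cores
# on both red and australia.
cd php-5.6.4
./configure --prefix=$HOME/phpbench/install
make -j8
```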
I did this at the same time on both the red and australia servers. In terms of build time, red was significantly slower than australia. Maybe the compiler itself needs to be optimized; however, that is not an issue for now.
Other build options need to be tested.
The output file of this build is "php-5.6.4/sapi/cli/php". I set this as an alias in ~/.bashrc. I won't have a problem since there was no pre-installed php.

alias php='~/phpbench/php-5.6.4/sapi/cli/php'

However, I think an alias name of the form php_date_option_version will be useful for distinguishing builds in the future.
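For example, something like this (hypothetical names) in ~/.bashrc:

```shell
# date_option_version naming scheme (hypothetical example):
# a default-options build of 5.6.4 made on Jan 22
alias php_20150122_default_564='~/phpbench/php-5.6.4/sapi/cli/php'
```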

5. build test
The build test shows the following result on the red server. It doesn't look great but, for now, let's skip over it.

Exts skipped    :   52
Exts tested     :   27

Number of tests : 13419              8895
Tests skipped   : 4524 ( 33.7%) --------
Tests warned    :    0 (  0.0%) (  0.0%)
Tests failed    :    5 (  0.0%) (  0.1%)
Expected fail   :   31 (  0.2%) (  0.3%)
Tests passed    : 8859 ( 66.0%) ( 99.6%)
Time taken      :  385 seconds
session rfc1867 sid only cookie 2 [ext/session/tests/rfc1867_sid_only_cookie_2.phpt]
Test session_set_save_handler() function: class with create_sid [ext/session/tests/session_set_save_handler_class_017.phpt]
Test session_set_save_handler() function: id interface [ext/session/tests/session_set_save_handler_iface_003.phpt]
Test session_set_save_handler() function: create_sid [ext/session/tests/session_set_save_handler_sid_001.phpt]
Test strncmp() function : usage variations - binary safe(binary values) [ext/standard/tests/strings/strncmp_variation6.phpt]

Result in australia:

Exts skipped    :   52
Exts tested     :   27

Number of tests : 13419              8894
Tests skipped   : 4525 ( 33.7%) --------
Tests warned    :    0 (  0.0%) (  0.0%)
Tests failed    :    0 (  0.0%) (  0.0%)
Expected fail   :   31 (  0.2%) (  0.3%)
Tests passed    : 8863 ( 66.0%) ( 99.7%)
Time taken      :  257 seconds

6. Getting PHP benchmarking script
There is a useful benchmark script, found here:

[hshwang2@red phpbench]$ wget
[hshwang2@red phpbench]$ unzip 

This script performs 140,000 math function operations, 130,000 string manipulations, 19,000,000 iterations of a for loop, and 9,000,000 if/else statements.

7. Result
In red server:

[hshwang2@red phpbench]$ php bench.php
|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 20:40:49
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.339 sec.
test_stringmanipulation   : 2.535 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.334 sec.
Total time:               : 8.061 sec.
[hshwang2@red phpbench]$ php bench.php
|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 20:45:54
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.341 sec.
test_stringmanipulation   : 2.549 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.333 sec.
Total time:               : 8.076 sec.
[hshwang2@red phpbench]$ php bench.php
|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 20:47:19
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 2.348 sec.
test_stringmanipulation   : 2.563 sec.
test_loops                : 1.853 sec.
test_ifelse               : 1.335 sec.
Total time:               : 8.099 sec.

In australia :

[hshwang2@australia phpbench]$ php bench.php
|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 20:45:58
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 1.534 sec.
test_stringmanipulation   : 1.647 sec.
test_loops                : 1.435 sec.
test_ifelse               : 1.127 sec.
Total time:               : 5.743 sec.
[hshwang2@australia phpbench]$ php bench.php
|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 20:57:00
Server : @
PHP version : 5.6.4
Platform : Linux
test_math                 : 1.553 sec.
test_stringmanipulation   : 1.678 sec.
test_loops                : 1.169 sec.
test_ifelse               : 1.307 sec.
Total time:               : 5.707 sec.

Just for reference, on my laptop:

|        PHP BENCHMARK SCRIPT        |
Start : 2015-01-22 16:11:37
Server : @
PHP version : 5.5.9-1ubuntu4.5
Platform : Linux
test_math                 : 1.202 sec.
test_stringmanipulation   : 1.230 sec.
test_loops                : 0.798 sec.
test_ifelse               : 0.650 sec.
Total time:               : 3.88 sec.

I think this benchmark run was not very reliable. However, after increasing the loop count, it will be possible to test optimizations by changing the code and build options.

by Hosung at January 22, 2015 09:21 PM

Neil Guzman

CDOT demo #2

Demo #2 was actually 2 days ago on the 20th, but here I am.

The demos were interesting and ended up in a cliffhanger (was excited to see the last demo). Our group demoed Gabriel's encoding library in Java and I sort of demoed the current Python server. The current state of the server at that time was that it was able to receive strings through an SSL connection and make the server do various stuff (like assigning the connection a client type and sending messages to the client).

In my previous entry I mentioned how our team was going to use Tornado and how cool it was and such. Turns out, after playing with it for a bit, it lacked lower-level networking features and didn't do what I wanted as simply as I thought it would.

We ended up switching over to Twisted. According to this site (which I found right after the demos and some tinkering with Tornado), Twisted seems to be the better choice because of its greater control over the network, and also because we are not designing the system for the web. So far there hasn't been an issue. This is (hopefully) the last change in frameworks.

The server at its current state can handle many secure connections and can decode the encoded messages that the clients send. After finishing the decoding, I will be moving on to sending messages to the client and other stuff. Our small test today involved connecting our personal phones to the server along with some emulated phones on the computers. We encountered an issue with older phones, but it has been fixed.

by nbguzman at January 22, 2015 09:20 PM

Anderson Malagutti

Hello world!

This is your very first post. Click the Edit link to modify or delete it, or start a new post. If you like, use this post to tell readers why you started this blog and what you plan to do with it.

Happy blogging!

by andersoncdot at January 22, 2015 09:07 PM

Gabriel Castro

Accepting all SSL certificates in Java

This week has been mostly about getting communication working between the client apps and the server.

While in production all data transfers will be done over SSL, it's nice to be able to use a self-signed certificate for testing and development.

The following example shows how to open a socket in Java that will accept any SSL certificate.

DO NOT do this in production.

import java.io.OutputStream;
import java.net.Socket;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

private static SSLSocketFactory trustingSSLSocketFactory() throws NoSuchAlgorithmException, KeyManagementException {
    // A trust manager that accepts every certificate without checking anything.
    TrustManager[] all = new TrustManager[]{new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] x509Certificates, String s) {
        }

        public void checkServerTrusted(X509Certificate[] x509Certificates, String s) {
        }

        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0]; // no trusted issuers required
        }
    }};
    SSLContext sslContext = SSLContext.getInstance("SSL");
    sslContext.init(null, all, new SecureRandom());
    return sslContext.getSocketFactory();
}

public static void main(String[] args) throws Throwable {
    try (
            Socket socket = trustingSSLSocketFactory().createSocket("", 9999);
            OutputStream out = socket.getOutputStream()
    ) {
        out.write("Hello World!".getBytes());
    }
}

by Gabriel Castro ( at January 22, 2015 08:58 PM

Christopher Markieta

Returning to Python

My programming career started as a teenager in high school. Some of the first computer languages I learned were Turing and C, but that was only to satisfy the requirements in some of my courses. When I decided to pursue a programming career, I chose Python to harden my Computer Science concepts. I used it for about a year, completing assignments that I found online from other universities, and starting my own projects such as a web scraper and an email auto-responder. It has been a few years since I have really gotten back into using Python, and I'm finding it slightly difficult to avoid my C++ syntax.


After mastering high school Calculus and experiencing the torture of Engineering Calculus AB at the University of Toronto, I have yet to take full advantage of my knowledge in Math, and my skills are getting a bit rusty.

In order to calculate velocity using 2 points on a map, we need to find the displacement between the points and the difference between the timestamps given for each sample. I will leave the velocity in its X and Y components for simplicity and for later calculations.

Map Coordinates

I would like to store map coordinates in seconds only, but online I could only find methods for converting between latitude/longitude and degrees-minutes-seconds (DMS).

However, according to the University of Nebraska-Lincoln, I can simply multiply latitude and longitude by 3600 to get the position in seconds.

1° = 60′ = 3600″

Total seconds = 60 × (60 × degrees + minutes) + seconds
Latitude in seconds = 3600 × latitude
Longitude in seconds = 3600 × longitude
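Putting the conversion and the velocity idea together in a small Python sketch (my own illustration, not production code; the sample coordinates are made up):

```python
def to_arcseconds(degrees):
    """Convert decimal degrees to arc-seconds (1 degree = 3600 arc-seconds)."""
    return degrees * 3600

def velocity_components(p1, p2):
    """Return (vx, vy) in arc-seconds per second between two
    (latitude_deg, longitude_deg, timestamp_s) samples."""
    lat1, lon1, t1 = p1
    lat2, lon2, t2 = p2
    dt = t2 - t1
    vy = (to_arcseconds(lat2) - to_arcseconds(lat1)) / dt  # north-south component
    vx = (to_arcseconds(lon2) - to_arcseconds(lon1)) / dt  # east-west component
    return vx, vy

# Two made-up samples 10 seconds apart:
vx, vy = velocity_components((43.0, -79.0, 0.0), (43.001, -79.002, 10.0))
print(vx, vy)
```

Keeping the components separate like this makes the later calculations (speed, heading) straightforward.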

by Christopher Markieta ( at January 22, 2015 08:39 PM

Hosung Hwang

[SPO600]SSH connection with .ssh/config

In the SPO600 course, I need to connect frequently to both an ARM64 machine and an x86 machine using ssh, so I am posting again about ssh connections with more information.

In my another posting(SSH connection without entering password), I explained how to make public/private key pair and how to set it up in the remote server.

Even after doing that, on the client machine the user still has to type the whole ssh command, like this:


There are several ways to minimize typing.

1. ~/.ssh/config
If there is no config file, create it with touch config and put this in it:

Host "server1"
    hostname ""
    user username
    port 1234
    IdentityFile /home/user/.ssh/id_rsa

Host "server2"
    hostname ""
    user username
    IdentityFile /home/user/.ssh/id_rsa

IdentityFile can optionally be used to specify the private key, and "port" to specify the port number.
Now the user can simply connect by typing ssh server1.

2. set alias in ~/.bashrc
If typing ssh server1 is still long, we can make an alias in .bashrc:

alias S1='ssh server1'
alias S2='ssh server2'

Now we can simply type S1 or S2.

Actually if the alias is written like this:

alias S1='ssh'

the .ssh/config setting is not necessary.
That works for ssh alone; however, to cover sftp and scp as well as ssh, setting up the .ssh/config file is a good solution.
The following commands are then possible for interacting over ssh:

ssh server1
sftp server1
scp testfile server1:

by Hosung at January 22, 2015 06:33 PM

Kieran Sedgwick

How to dig through an app’s view rendering system

Ever come across a web application on github where all you’re concerned with is the client-side portion? Ever realize that you still need to dig through the back-end in order to understand how the hell it generates all of those client-side views? Ever give up, make a cup of tea, cry a little bit and watch hours of Looney Tunes on YouTube? Well I have. Except that last part, which I will neither confirm nor deny.

But I’ve found it does get easier. Here are some tips:

1. Locate the name of the rendering engine

How hard can this be? Pretty hard. Using nodejs apps as an example, their actual code can be laid out in so many unique-as-a-snowflake-isn’t-it-beautiful ways that it isn’t as easy as it appears. But it can be done.

Look through the app’s manifest file for templating/rendering engines you recognize. If you don’t recognize anything, but you know views are generated on the server (here’s looking at you “/views” directory!) do a global search for any of the individual view’s file names. The goal is to trace the code back to the part where the app configures its rendering engine. Once you know what they’re using, you’ll know what documentation to look up.

2. Locate the view or fragment you need, and figure out how it connects to the main view

If the view you’re looking for is a full view, rather than a fragment, skip to the next step.

Otherwise, look through the main views and see which of them pulls in the fragment you’re looking for. Often starting with the index view is a good idea.

3. Find and read the render call, and trace where the data goes

Consider this your entry point into the views proper. Cuz it is.

For view fragments, finding where the parent view is being rendered is key. The most important variables are often passed in this step and are then propagated to view fragments from there.

4. Use this chain to understand the code you wanted to know in the first place

Now you have a direct line between app and view, and you can see what information is being passed in from here. Follow the bouncing ball, take deep breaths, and it’ll all be fine.

If not, there’s always Looney Tunes!

by ksedgwick at January 22, 2015 05:19 PM

Alfred Tsang


The system that was used in the testing was PHP.

PHP has a lot of files to download. Downloading them took 5 minutes, building took 25 minutes, and running it took around 40 minutes.

This is my first time building software, and this lab was fun, but difficult.

by kaputsky263 at January 22, 2015 05:09 PM

Hong Zhan Huang

SPO600: The first lab… Open Source Contributions

The first lab… the first quest! How exciting. Let’s learn how to play the game of open source software.

The primary objective this time is to cover the process of how someone can contribute to an open source project and what it all entails. To do this, I looked into the following two projects: OBS – Open Broadcaster Software and Anki. Without further ado, let's begin.

OBS – Open Broadcaster Software

License: GNU General Public License v2



IRC: #obs-dev on Quakenet


OBS is open source software built for the purpose of video recording and live streaming. It serves as a no cost alternative to commercial streaming oriented software such as Xsplit. Originally OBS was only available for the Windows platform but a multiplatform successor is in the works, which will support Windows, OSX and Linux platforms as well as bring along many new enhancements and features.

Contributing Code:

The whole of OBSproject’s guidelines regarding contributions of code can be found at

The long and short of it seems to be:

  1. Discussion is a forefront virtue in open source development. Conversing with others on the forums/mailing lists/irc prior to working on the project or an aspect of it is highly suggested.
  2. The coding style is Linux-style KNF and is to be strictly followed. A general guideline of this style can be found at (there is a subtle exception for C++ style, in that camelCase is encouraged to differentiate it from C code)
  3. Commits are more than just code changes. They should be well documented and formatted properly. All pull requests will be reviewed by the primary maintainers.
  4. The core code is C and it is highly suggested to use C unless an API requires the use of C++, Objective C or otherwise. On the same note, use dependencies only when needed and not for the sake of convenience.
  5. There is a bug tracker that lists items needing work, to aid prospective contributors in deciding what they could assist with.

One thing to note that isn't stated in the above guidelines is that while the source code is located in the project's GitHub repository (where the commits and pull requests occur), the GitHub issues feature isn't used. The bug tracker that the OBS project uses instead is called Mantis, and it is located here: For contributors, forum accounts are linked to this bug tracker.

Pull Request Tracked:

Time needed to resolve/implement: 2 days

This pull request involved adding buttons to a dialog box. Communication occurred between the author of the pull request and the project's main code maintainer, who also happens to be the originator of the project. It seems that many if not all pull requests for OBS are reviewed by this maintainer before being committed. The conversation is short and concise, as little needed to be changed in the request author's code. The maintainer, however, made minor changes in regards to grammar and style, which he noted. Although the conversation is short, it seems to me that the code maintainers are quite responsive, given that it took only a couple of days to approve the commit. Other pull requests I've looked at seem to follow suit.


License: GNU Affero General Public License 3




Anki is an open-source flashcard application designed to be an enhanced set of flashcards that makes remembering things easier and more efficient than traditional studying methods. Anki allows users to create all sorts of flashcards for various languages, with the ability to embed audio, images and video. Users can share their flashcard decks and user-created add-ons with others directly and over the AnkiWeb network. Anki is available on Windows, Mac, Linux, Android and iOS.

Contributing Code:

The prerequisites prior to participating in the project are located at: &

The main topic deals with the preparatory work needed to get started with contributing, namely proper installation of the dependencies that Anki uses (they seem to be mostly Python and Python-related packages). Beyond that, and requesting contributors to carefully read the AGPL license, there does not seem to be much in the way of coding guidelines or other description of how the contribution process occurs. Perhaps it is because I'm not well versed in Python that I'm unaware of some unified coding standard for the language, but this is certainly a contrast to the OBS project's much more defined ruleset. I've found that the forum they use doubles as a bug-report venue and as general support for using the application. Much like OBS, Anki also foregoes using GitHub's issues feature, which is something to note.

Pull Request Tracked:

Time needed to resolve/implement: N/A

This pull request involved an issue brought forth by a user who had an odd pixelation effect on the icon used in an Anki add-on. Interestingly, the author of this pull request is also the creator of said add-on. The conversation here is between the creators, who are also primary maintainers of their respective software. The communication is responsive here as well, with an easily seen back-and-forth in the flow of discussion. Since the pull request author is an individual involved with code contribution, most issues are deferred to his expertise as long as the fix doesn't break other existing add-ons. The end result of this pull request was its withdrawal, due to the author not finding a working solution for all the platforms affected by this issue. Despite this, the conversations brought out ideas on potential solutions, the scale of how many add-ons this issue affected, and how future releases could serve to fix it.


Even this cursory glance into the procedures required to begin contributing to these two projects gives a sense of the scale of what one needs to do in order to contribute. Personally, I think I would need to fully comprehend the whole of each license as the first step. It is a good bulk of information that shouldn't be skimmed. Following that, establishing and entering the proper communication channels with the current band of contributors and maintainers looks to be key. Delving into their wealth of knowledge and understanding the expectations is a boon in itself, but establishing relationships is a bigger one still. This is all before even learning about the code base to start producing even a line of code, which would come next.

Some other final thoughts:

OBS: Their methods are very well laid out and are easy to comprehend. There is a particular strictness to it all but I would think this kind of structured way of doing things is efficient and preferable.

Anki: In contrast to OBS, things seem a bit looser at just a surface glance, which gives me a hazy impression of how contributions are processed, but perhaps this more 'free'-seeming methodology has its perks as well.

Both: What both projects exhibit is a good sense of communication between the primary maintainer and the contributors. This conclusion comes from a brief look only into their git repositories and forums/bug trackers. Having not explored their other avenues of contact, such as the mailing lists and IRC, I can only imagine there are many more discussions in places I've not been.

The other point on my mind is that both projects did not make use of GitHub's built-in issues feature for bug tracking, and instead used different methods. OBS' reasoning for abandoning it was that it in essence created an additional forum atop their own that needed management. I wasn't able to ascertain the reason behind Anki not utilizing it, but perhaps the reasoning is similar.

Quest completed! Obtained SSH public and private keys. End log of an SPO600 player until next time~

by hzhuang3 at January 22, 2015 07:41 AM

Jan Ona

SPO600 – Open Source Project Submission

For SPO600, I am required to research 2 open source projects, with details on how code/bug fixes are submitted and potentially used in the official release of the application. For this research, I chose Blender and StepMania.



Blender is an open source 3D animation program that I worked with during high school. It features many functions such as modeling, rigging, animation, physics simulation and rendering, as well as other big features such as a built-in game engine. The program also features an API that allows users to add new tools written in Python. Blender is released under the GNU General Public License (GPL).

Code Submission

Blender’s main website contains an entire section for development, in addition to a wiki used by both end-users and developers of the project. Due to the size of the project, code submissions are separated into 2 sections: “BF Blender” for the official release code, and “Add-ons” for optional features provided by people outside the core team.

Code/bug fix submissions use Phabricator, a platform that provides a collection of tools for Git and code review. Submissions are then reviewed by a volunteer assigned to the module the code relates to. This process can vary in length depending on the amount of code that needs to be reviewed. If accepted, the patch/additions will be added in the next release cycle.

Steps for submission:

1. Create a diff between the new code and the original code.

2. Add descriptions and title.

3. Add reviewers

4. When the reviewers deem the patch acceptable, the patch will be accepted. Otherwise, the patch will be considered in need of revision and will need to be updated.
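Step 1 boils down to producing a unified diff, which is the format trackers like Phabricator generally accept. A minimal self-contained illustration (the file names here are placeholders, not real Blender sources):

```shell
set -e
dir=$(mktemp -d)
printf 'old line\n' > "$dir/file.c.orig"   # stands in for the original code
printf 'new line\n' > "$dir/file.c"        # stands in for the modified code
# diff exits with status 1 when the files differ, hence the || true
diff -u "$dir/file.c.orig" "$dir/file.c" > "$dir/my-fix.diff" || true
head -n 2 "$dir/my-fix.diff"
```

In a git checkout, `git diff > my-fix.diff` produces the same kind of output.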

The entire process can take from a few days* to weeks, as seen here*.

* – needs an account to view



StepMania is a free dance and rhythm game for Windows, Linux and Mac. It was initially developed as a simulator of Konami’s Dance Dance Revolution, and was eventually updated with features such as an editor. The software is free for personal or arcade use. It supports multiple controllers, such as keyboards and dance pads. The software and its site are under the MIT License.

Code Submission:

Due to StepMania’s small developer community, the project takes a more casual approach to contributions. Code submissions are all done as pull requests via GitHub, where users can discuss the patch/feature before it is tested by Travis CI. Pull requests can then be merged into the main branch by the moderators.

The entire process usually takes only a few hours, since submissions go more directly to the main contributors of the project.

by jangabrielona at January 22, 2015 05:57 AM

Thana Annis

Contributing to Octave

Octave is an interpreted language used for numerical calculations, and it is licensed under the GNU General Public License v3.

To contribute to this project you must first join a mailing list. The more discussions you take part in on the mailing list, the more likely your changes will be included in a release. There is a wiki of possible projects you can work on, and a bug tracker where you can pick any outstanding bug to try to fix.

Octave uses Mercurial to manage code changes, so you need to make patch queues to submit any changes. With Mercurial you can clone the repositories to your machine to get the source code to work on.

To submit a change to this project you will need to commit the code through Mercurial, following the guidelines. E.g., if you are submitting a bug fix, you need to include not only a message about what your commit encompasses but also the number of the bug you are fixing, so that it can be tracked.
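The Mercurial side of this can be sketched roughly as follows (the repository URL, commit message, and bug number are placeholders, not Octave's actual ones):

```shell
hg clone http://example.org/octave octave    # clone the source repository
cd octave
# ... edit the code ...
hg commit -m "Fix <summary of change> (bug #NNNNN)"
hg export tip > bug-NNNNN.patch              # patch file to attach when submitting
```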

Next you will go to the patch submission page to submit a patch. You can also go to the bugs or tasks sections and find the submit pages for those. After you submit your change, it is added to the list to be looked at. Your change can be assigned to a specific person to review, and it looks like a lot of the changes are responded to within 24 hours.

Looking specifically at bug #32924, which was submitted with a patch: the original submission date was Mar 27, 2011. There were some back-and-forth discussions to pinpoint the problem; then on April 2nd a change was introduced through the steps mentioned above. After a code review process, the patch was deemed worthy of applying on Oct 2nd, 2011.

by bwaffles91 at January 22, 2015 04:26 AM

January 20, 2015

Kieran Sedgwick

A Github contributor/project guide

I wrote a guide to using github in a workflow as part of a course last semester. Here’s a quick dump of it:

A Github Overview

This document covers my understanding of best practices using Github and git as version control and issue tracking tools. We may not need to implement all of this stuff, but the majority of it will be helpful to use consistently.

First Steps: Setting up a development environment for a new project

To follow the workflow I describe here, a couple of prerequisites have to be satisfied.

First, clone the central repository

This assumes basic git knowledge, so I won't cover the details. If you don't want to put in your Github user/pass every time you push or pull, you should set up an ssh key locally and with Github.

Then, fork the central repository

Because our work will eventually be merged into a central code repository, that represents the latest official version of whatever the project is, we need a way to store our own work on github – without affecting the central repository. The easiest way to do this is to fork a repo:

  • Navigate to the main repository on Github and click "Fork", then your account name.

Forking a repo

Selecting your user

Finally, set up your local git remotes for this project.

Git remotes are references to non-local copies of a git repository. For a useful workflow, at least two are required:

  • origin – Points to your fork of a project's repository
  • upstream – Points to the main repository for a project
  1. Rename the remote named origin to upstream with git remote rename origin upstream. By default, git will set the origin remote to point to the repository you cloned from. In this case, assuming you've followed these instructions, that will be the main repository rather than your fork.
  2. Add your fork of the repo as the origin remote with git remote add origin GIT_URL
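The two steps above can be demonstrated end to end with local repositories standing in for GitHub (all paths and names here are placeholders):

```shell
set -e
work=$(mktemp -d); cd "$work"
git init -q main-repo                      # stands in for the central repository
git -C main-repo -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "initial"
git init -q --bare fork                    # stands in for your GitHub fork
git clone -q main-repo project
cd project
git remote rename origin upstream          # step 1: cloned-from repo becomes upstream
git remote add origin "$work/fork"         # step 2: your fork becomes origin
git remote -v
```

With real GitHub repositories, the `git remote add origin` URL would be your fork's GIT_URL.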

Working with issues

An issue or bug describes a specific problem that needs to be solved on a project. Some examples are: fix a crash, update documentation, or implement feature A. On occasion, issues will be so big that they would be better described as a series, or collection, of smaller issues. An issue of this type is called a meta issue, and these are best avoided unless completely necessary.

Issues serve a number of non-trivial purposes:

  1. They scope a piece of work, allowing someone to take responsibility for it.
  2. They provide a place for discussion of the work, and a record of those conversations
  3. If well scoped, they provide a high-level view of what needs to be accomplished to hit release goals.


Github provides a way to mark issues with labels, providing an extra layer of metadata. These are useful in cases that are common on multi-developer projects:

  1. Prioritization of issues, marking them as critical, bug, crash or feature (among others)
  2. Identification of blockers, by marking connected issues as blocked or blocking
  3. Calls to action, such as needs review or needs revision

Applying a label

Creating labels is fairly easy:

Creating a label


A blocker, with respect to issues, is an issue whose completion is required before another issue can be completed. With good planning, blockers can mostly be avoided, but not always.

If an issue is blocking another issue, label it as blocker and in the issue description, mark which issue it blocks:

A blocker

Likewise, if an issue is blocked, label it as blocked and mark which issue blocks it:


Creating an issue

The line between over- and under-documenting work with issues is thin. Ideally, every piece of work should have an issue, but this relies on skillful identification of pieces of work. "Implement a feature" is a good candidate, while "add a forgotten semi-colon" probably isn't.

The key point to remember is that collaboration relies on communication, and issues provide a centralized location for discussion and review of work that is important to a project.

For this reason, as soon as you can identify an important piece of work that logically stands on its own, you should file an issue for it. Issues can always be closed if they are duplicates, or badly scoped.

After identifying a good candidate, follow these guidelines when creating an issue:

  1. Name the issue with a useful summary of the work to be done. If you can't summarize it, it's probably a bad candidate.
  2. Describe the issue properly. If it's a crash or bizarre behaviour, include steps to reproduce (STR)!

Milestones, project planning and triage

Just like issues represent a logical unit of work, milestones represent logical moments where development hits a larger target. They can be useful for prioritizing issues, and can even have due dates attached to them. They aren't always necessary, but can be very helpful when skillfully determined.

In a project you are a key member of, they should be discussed. The act of triaging is prioritizing issues and making sure that the most important ones are addressed first. Milestones can be useful in this pursuit.

While creating an issue, you can add it to a milestone easily:

Adding to a milestone

Workflow basics

A workflow is all the work, other than writing code, that goes into fixing a bug or solving an issue. The actual writing of code fits into the workflow, but it is useful to separate the ideas at first.

The steps in a workflow will logically flow from the contribution guidelines of a particular project, but a good framework can be established and applied in most cases:

  1. Claim an issue, usually by assigning yourself to it (if you have permissions) or by commenting on the issue saying you want to solve it.
  2. Create a local branch based on master, whose name indicates which issue you've selected, and what the issue covers. E.g. git checkout -b issue3-contributorGuidelines
  3. Develop your patch, and commit as needed.
  4. When ready for a review, push the branch to your fork.
  5. Open a pull request against the main repository.
  6. Flag a reviewer so they can get to work reviewing your code.
  7. Follow the review process.
  8. When the review is finished, condense your commits into their most logical form (see below) and force push your changes with git push origin -f BRANCH_NAME. NOTE: This will overwrite all the commits on your remote branch, so be sure you won't lose work.
  9. Merge your code in if you have permissions, either on GitHub itself or through the command line.
  10. Delete your local and remote branches for the issue. You've done it!
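The steps above can be sketched as one concrete command sequence. This is a hedged sketch: the issue number, branch name and file are all made up, and the local bare repository created at the top stands in for your GitHub fork ("origin") purely so the sequence can run end to end.

```shell
#!/usr/bin/env bash
# Sandbox setup: a throwaway bare repo plays the role of your GitHub fork.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init --bare fork.git
git clone fork.git work && cd work
git config user.email "you@example.com" && git config user.name "You"
git symbolic-ref HEAD refs/heads/master        # normalize the branch name
git commit --allow-empty -m "initial commit"
git push origin master

# Step 2: a branch named after the issue
git checkout -b issue3-contributorGuidelines
# Step 3: develop the patch and commit as needed
echo "Be excellent to each other." > CONTRIBUTING.md
git add CONTRIBUTING.md
git commit -m "Fixed #3 - Add contributor guidelines"
# Step 4: push the branch to your fork
git push origin issue3-contributorGuidelines
# (Steps 5-7 happen on GitHub: open the pull request, flag a reviewer, review.)
# Step 8: after condensing commits, force push (overwrites the remote branch!)
git push origin -f issue3-contributorGuidelines
# Step 10: delete the local and remote branches
git checkout master
git branch -D issue3-contributorGuidelines     # -D because nothing was merged in this sandbox
git push origin --delete issue3-contributorGuidelines
```

In real use the force push in step 8 only matters after an interactive rebase; here nothing was rewritten, so it is a no-op.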

Good commits

Like issues, commits must be well scoped. At most you should have one commit per logical unit of work. If issues are well scoped, this means one commit per issue. The purpose of this is to make it easy to undo logically separated pieces of work without affecting other code, so you might end up with more than one commit. Aim for one as you start, and it will keep your work focused.

As a final note, a good format for your commit messages is: "Fixed #XXX – Issue summary", where XXX is the issue number. When done this way, the issue you reference will be automatically closed when the commit is merged into the repository.

Opening a pull request

A pull request is a summary of the changes that will occur when a patch is merged into a branch (like master) on another repository. Opening them is easy with GitHub.

After pushing a branch:

Quick file


Manual file

As always, make sure to communicate the pull request's purpose well, along with any important details the reviewer should know. This is a good place to flag a reviewer down.

The review process – having your code reviewed

During review, you and a number of reviewers will go over your patch and discuss it. When you need to make changes to the code based on a review, commit them separately from the main commits of your work for the issue. This helps preserve the comments on the pull request.

When your code has reached a point where it is ready for merging, you can combine your commits into their final form with the interactive rebase command. Interactive rebasing is a key git skill, but has serious destructive potential. Make sure to read the link in this paragraph in full before attempting it.
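As a sketch of what that final combination looks like, the following throwaway repository squashes two review-fix commits into the main commit. The commit messages and file are made up, and GIT_SEQUENCE_EDITOR rewrites the rebase todo list non-interactively here so the example runs unattended; in real use you would edit that list by hand in your editor.

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "base"

echo one   > f && git add f && git commit -qm "Fixed #42 - Implement feature"
echo two   > f && git commit -qam "Review fix: rename variable"
echo three > f && git commit -qam "Review fix: add missing test"

# Turn lines 2 and 3 of the todo list from "pick" into "fixup": the two
# review commits melt into the first one, which keeps its message.
GIT_SEQUENCE_EDITOR='sed -i "2,3s/^pick/fixup/"' git rebase -i HEAD~3
git log --oneline
```

The history now contains a single "Fixed #42" commit (plus the base commit), with the final file contents intact, which is exactly the shape you want before the force push.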

The review process – reviewing someone's patch

A reviewer has two important jobs, sometimes split amongst two or more reviewers:

  1. Test the code
  2. Walk through the code thoroughly, commenting on changes that should be made.

Be polite, and explain your comments if necessary. If you aren't sure about something, invite discussion. The code's quality is the point.

A major difficulty for reviewers is finding time to review when writing patches of their own. This can be mitigated somewhat by discussing it with contributors ahead of time, so you can both be working on the code at once without interrupting development of your own patches.

Comments can be made directly on code in a pull request:

Adding a comment

Proper communication on Github

Issue tracking's main appeal is providing a place to solve problems through discussion, and have that conversation available to reference from that point on. Pull requests and issues usually require some conversation. Key guidelines are mostly common-sense (respect each other, etc.) but some specific ones are:

  1. Check your github notifications at regular intervals, so people get the feedback they need.
  2. Learn the github markup language (a variant of markdown) to help communicate with code examples, links and emphasis.
  3. Control expectations by being explicit about what you can and cannot handle.

by ksedgwick at January 20, 2015 05:26 PM

Kenny Nguyen

Pre Presentation Blog

Writing this blog before presenting to everyone. I'm typing this rather slowly, maybe 20 words per minute.

I've started trying to learn Dvorak again. I've memorized where most of the keys are, but they're not mapped to my muscles yet, I presume.

That's totally irrelevant though, I guess; I should really be talking about my progress.


We've basically finished everything that needs to be done, we do have some bugs though. For example:

  • Inline error display breaks if user uses a theme other than default

Other issues are mostly from a UX standpoint; the gutter warning isn't very intuitive.

We're slowly finding more issues by trial and error.

by Kenny Nguyen at January 20, 2015 05:26 PM

Kieran Sedgwick

Following a spec: Research tips

Porting is not a process I’m familiar with, and it shows. I’m working on an issue for filerjs, a JavaScript implementation of the POSIX filesystem standard, where I have to implement the mv utility found in Linux terminals.

Filer is very interesting to me. It’s the first JavaScript library I’ve worked on that overhauls the usefulness of JavaScript and as a result of it being an implementation of a standard, I’ve had to do careful research to ensure that I meet the spec. This hasn’t been too difficult because of the excellent documentation filer has, but I had to be careful.

First, read the spec carefully

The main purpose of a port is to provide the same functionality as any other place the ported utility exists. The first thing to do is understand the API, whether this is user level (“This command is invoked in the following manner, with the following options specified like so…”) or developer level (“Instances expose the following methods…”). It’s senseless to move on without this.

I was able to identify the method of invoking mv, the format of its arguments and the errors it would return when things went wrong.
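To make the invocation concrete, here is a quick look at the two basic forms of mv and one of its error cases (the file and directory names are made up for the demonstration):

```shell
#!/usr/bin/env bash
tmp=$(mktemp -d) && cd "$tmp"
touch notes.txt && mkdir archive

mv notes.txt draft.txt      # form 1: rename a file
mv draft.txt archive        # form 2: move a file into a directory

# Error case: the source does not exist, so mv prints a diagnostic
# on stderr and exits non-zero.
mv missing.txt archive || echo "mv failed as expected"
ls archive
```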

Next, test your understanding with an existing port

You did it! All of the knowledge in the spec has been absorbed and filtered, forever changing the way you see the world. Now, confirm it. Find a working, to-spec version of the library in question and observe how it works to confirm your understanding. Is it a developer library? Look at the unit tests. Is it a user-level tool? Use it! And look at the unit tests (if you can find them without much effort).

I discovered that the mv utility behaved strangely in circumstances I thought I understood. Re-reading the spec clarified this for me. Good thing I checked!

Then, read the spec carefully

Hah! And you thought you were finished.

Since it’s a port, it means you aren’t reinventing the wheel. This is a good thing. In fact, the spec might give valuable clues and this is when you look for them. Does the spec itself specify how something should be implemented? Does it rely on other packages/libraries/protocols?

I found that mv relied on rename in most situations, and the situations it didn’t also didn’t apply to my use case.

And so…

Read the spec, look at existing examples, make sure you understand what you’re shooting for and build it based on what’s already been designed and built. Huzzah!

by ksedgwick at January 20, 2015 03:44 PM

Klever Loza Vega

Week 2

Throughout my second week of working at CDOT I continued to work on the Brackets extension project. I also became more familiar with Git and GitHub. More importantly, I learned a proper way of working, collaboratively, on GitHub.

Git and GitHub

Git is a distributed version control system. Among other features, it helps keep track of our work and, if needed, revert back to previous states in a snap. GitHub is a Git repository hosting service. It allows us to work collaboratively on a project with people all around the world. The official Git documentation is a great place to start learning about Git. GitHub also has a guide on how to set up and get started with GitHub.


As we continued to work on the Brackets project, my colleagues and I realized that we weren’t working as efficiently as we could. We were all working off one master repository, pushing and pulling constantly and fixing many things at once. This made it difficult to tell what had been changed and who made that change. This is where issues and bug tracking come in handy.

We were introduced to the Issues feature on GitHub. Essentially, it keeps track of tasks to be done and/or bugs to be fixed in a project. Once created, issues can be assigned to people to work on. Once a task is completed, the person assigned would initiate a pull request, the code would (hopefully) be approved, and the issue would be closed. This keeps the work process clean and tidy.


Unfortunately, we didn’t follow this procedure from the beginning. Our code started to become quite messy. Thankfully, apart from the issues feature, our colleague also suggested we follow a proper workflow plan. The plan he suggested goes like this:

  1. If working off somebody else’s repository – make a fork to your account and clone it to your desktop.
  2. Make another branch and work from there. The point is to keep your master branch clean and in sync with the upstream master branch. It is suggested you make a different branch for each issue/task you’re working on.
  3. Once you’ve made changes and committed, push the changes to your branch on GitHub.
  4. If you navigate to your branch on GitHub it will most likely prompt you to do a pull request automatically. If not, you can always do one manually.
  5. Once the pull request is sent it has to be approved by the owner of the master branch you forked from.

You would follow these steps over as necessary. Of course the preceding steps assume you’re working in a perfect world without any problems. However, that’s not always the case as we found out this week.


When we were trying to do a pull request to the upstream master branch we ran into a problem. The problem had to do with the commit history. The solution was to do a rebase. The point of this rebase was to change the order of commits by altering the commit history. Warning! Messing with the commit history can cause problems later on so make sure you know what you’re doing. It’s best to read the documentation and decide which is the best solution for your situation. These are the steps we took to solve our problem:

  1. Make sure your master branch is synchronized with the upstream master branch.
  2. Switch over to the branch that was giving you problems, the command is:
    git checkout your-other-branch
  3. Run
    git rebase -i master

    Here you’re rebasing your other branch with your master branch.

  4. You will then be presented with a screen containing all the commits from your other branch that are not in your master branch. Here, follow the prompt and decide which commit you want to pick and which one(s) you want to rename, squash, etc.
  5. If it fails – like it happened to us – you most likely have a merge conflict. Fix this conflict and type
     git rebase --continue

    Hopefully it succeeds this time.

From here you can continue to do a pull request as you normally would.
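Step 5 is easier to picture with a throwaway repository that forces a conflict on purpose. In this sketch, GIT_SEQUENCE_EDITOR=true accepts the default todo list and GIT_EDITOR=true skips the message prompt so it runs unattended; in real use git stops at the conflict and waits for you. File names and messages are made up.

```shell
#!/usr/bin/env bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com" && git config user.name "You"
git symbolic-ref HEAD refs/heads/master
echo base > f && git add f && git commit -qm "base"

git checkout -q -b your-other-branch
echo branch-change > f && git commit -qam "branch work"

git checkout -q master
echo master-change > f && git commit -qam "master work"

git checkout -q your-other-branch
# "git rebase -i master" hits a conflict in f, so the rebase stops...
GIT_SEQUENCE_EDITOR=true git rebase -i master || {
    echo branch-change > f                  # ...fix the conflict (keep our version)...
    git add f
    GIT_EDITOR=true git rebase --continue   # ...and carry on.
}
git log --oneline
```

After the rebase succeeds, your-other-branch sits on top of master and the pull request can proceed as usual.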


This week we continued to work on our extension project. We now have a red button beside the line number where an error was found. Clicking on this button displays the error. Clicking on it again hides the error. Fixing the error removes both the button and the error message. Our extension now highlights the wrong HTML code rather than underlining it. Lastly, the syntax is now checked when the document is changed (i.e. code is added, deleted, copied or pasted) as opposed to when a key is pressed, as it did before. This makes the extension more efficient. For instance, the syntax won’t be checked when the cursor keys are pressed like it did before.

Sample showing the red button beside the line number, the highlighted text and the error.

by Klever Loza Vega at January 20, 2015 04:39 AM

January 19, 2015

David Humphrey

Embedding license data in images

This term I've got a good group of students taking my second open source course. Unlike the first course, which aims to teach the theory and practice of open source, the second course aims to get students engaged with a larger piece of work.

During the fall I was chatting with Ryan Merkley about some of the technical problems faced by Creative Commons. One that seemed like a good fit for the students was how to create a stronger bond between image files and a CC license. Because a CC licensed image is meant to move around the web, and get used in new contexts, it is difficult for license data that lives in a web page to stay connected with the image over time. I know I'm guilty of just right-clicking images and copy/pasting or saving.

What would be interesting is if the license information could ride along with the image itself. All that's needed is a simple way (i.e., a library or set of libraries) for people to read and write that license data into an arbitrary image file: maybe this happens in a camera app on Android, or maybe it's used as a web service (i.e., you upload an image, pick a license, and you get back your original image stamped with the license data internally); maybe it finds its way into browsers some day, and you can readily get at the license info just by interacting with the image in a page.

It turns out that all of the major image formats support embedded textual data, which decoders are free to ignore (and usually do): the PNG spec allows for Textual Chunks, which are key-value pairs of plain or compressed text; the JPEG spec (pdf) allows for Comment and Application Data; and GIFs allow Comments.

The goal of the project will be to create an open source library, or set of libraries, that can be used in desktop and mobile environments, and give developers a simple way to encode and decode license information from images. A number of attempts at holistic image metadata (pdf) embedding have been discussed and developed in the past. This project will be informed by these recommendations, but will focus on licensing information vs. the broad spectrum of image metadata that is possible. Over time the scope of such a library could expand to include other digital works beyond just images.

It will be interesting to see the decisions the students take in terms of implementation, since a goal of the project is that such a library be useful across mobile platforms. Should it be written in Java and then automatically converted to Objective-C and JS using j2objc and gwt, as Google would do, or should everything be written in C++ and then wrapped for each platform?

There are numerous interesting theoretical and technical problems to be solved, and I look forward to digging into some of them with the students. I'll write more as we progress toward a solution.

by David Humphrey at January 19, 2015 07:05 PM

Alfred Tsang

Analysis of two contrasting software projects

OpenCV – This is a library that has a BSD License.

Patches in OpenCV are submitted through GitHub. Basically, a tester checks the submitted code to see if it works. If it does, the code goes on to be reviewed. The reviewer judges whether the code is useful and may ask the author to improve it. The code then goes through the BuildBot to verify that it works. The reviewer repeats this process as necessary until the code is either merged or rejected.

If the code does not work, the programmer has two weeks to fix it. If the code is not fixed in two weeks, it is rejected. If the code is fixed before the two weeks are over, a ticket will be created. The purpose of the ticket is to inform other programmers about the issue.

Code can also be rejected outright if the person looking at it decides it is not worth keeping and discards it completely.

SQL-Ledger- This is an accounting system that has a GNU General Public License.

Patches in SQL-Ledger are submitted through an online form called contrib. In the form, one must indicate which version the patch is for.

In OpenCV, the advantage is that code is checked to make sure that it is up to standards. There are three stages working together to add the code to the project. The disadvantage is that submitting code may take a while.

In SQL-Ledger, the advantage is that code is submitted through a form and no external program is needed. The disadvantage is that the code does not get a lot of feedback. Another thing wrong with this is that any number of people can work on the same problem without knowing who’s working on what.

In OpenCV I would have to know GitHub’s push and pull commands. In SQL-Ledger I would have to know how to use the submission form, as well as how to create a patch file.

by kaputsky263 at January 19, 2015 03:17 PM

Andrew Benner

First Project — Week 2 January 12-16, 2015

Week two has finished and we had lots of work to do. I completed the code that gives the line number where the error begins and the line number where the error ends. After I had completed that portion, I had to determine how many characters from the beginning of the line the error starts and ends.

Once the code was completed, we were able to join our group code together. One of our team members was able to code enough of a user interface for us to test the extension's functionality. At this point, we determined that our extension is functioning, but contains bugs. Certain HTML errors were rendered correctly, but not all. I worked on these bugs to make each HTML error render correctly. Also, I worked on making the error messages a little more friendly and informative.

Right now we think the Slowparse section of the project is completed, so I familiarized myself with the code base involved with the user interface so I could help close the current issues. Once I was more familiar with the code, I was able to change the error notification button and alter the error highlighting. Instead of the error being underlined, the error is now highlighted with a background of red.

The next challenge I encountered was altering the “gutter”, which is the portion of the screen that contains the line numbers in the text editor and the error notification button. The issue with the gutter was that the line number the error was on didn’t show. I was able to resolve the issue so when the error button is present the line number also shows right beside the button.

In addition to my direct work on the Brackets extension, the more experienced members of the team helped me with Github workflow. They really helped me to understand the typical workflow and how I should be implementing my code with a clean, professional approach. They also taught me about the process of opening and closing issues on Github. It really helped me to further understand the workflow and how to keep everyone on the project up to date with work that needs to be done and who has completed what work.

Week two was exciting and I learned even more than the previous week. I’m looking forward to finishing up the extension and to see what week 3 brings.

by ajdbenner at January 19, 2015 02:14 PM

Kenny Nguyen

Blog Restart

I've had this blog up for about half a year now, but it was woefully unprofessional so I decided to unpublish everything, and start anew.

So to start off, a 2 week update.

So it's been 2 weeks since I started working at CDOT full time and I'm honestly enjoying myself so far. The hardest thing I had to deal with week one was adhering to the same schedule every day. The last time I had to do that was in high school, and it's been 5-ish years since I left (maybe 6; I'm terrible with remembering lengths of time).

We've mostly been working on our Brackets extension so far, and I'm in charge of making it pretty. So far I've been somewhat successful in some ways, but not really that successful in others. I find I tend to play support more than I play entry fragger (video game terms; is that unprofessional?). I'm slowly trying to figure out how to get out of my shell of trying to help people all the time and focus on what I need to get done.

Well, that's the end of this update for now. I randomly decided to update my blog at 3 a.m. on a Monday; I'm still trying to get my sleep schedule in order, and failing. Hopefully I fix this tendency eventually.

This is Kenny Nguyen, signing off~

P.S. - My tag on IRC is morri

by Kenny Nguyen at January 19, 2015 08:25 AM

January 18, 2015

Bradly Hoover

Contributing to open-source

As I stated in my first post, I have yet to contribute to an open source project.


One of the classes I am taking this semester requires participation in the open source community: contributing to projects, blogging about it, ensuring I follow the rules of the open source world. I see this as a very positive thing as it will push me to do something I have been meaning to do for a while.

Yes, that is partially why this blog is up and running. It is required for the class. Do I like it? No, I dislike the idea of blogging. Not the medium, nor the content, nor the fact that my words and my work are going to be out there for everyone to see. I frankly love sharing my thoughts and ideas in the hope of educating people. I dislike blogging because I do not have the discipline to write consistently. I would hate having content out there for people to see, then not writing for 3 months. It would feel like a failure on my part, something I did not complete. But, I digress….

“How can I contribute to an open source project” is a very popular question on the subreddit /r/learnprogramming. So much so that it is in the FAQ. There seems to be a lot of confusion or uncertainty about the process of contributing to a project. The process varies depending on the project you have chosen to offer your time to. I am going to take two projects, VLC and CakePHP, and look at the process for submitting code for bug fixing. I’m not going to cover how to get the source code or what you have to do to learn about the inner workings of the code. You can do that on your own for each product.


VLC is a hugely popular and hugely successful open source media player. I have used it for many years and it has never failed to play a video, regardless of the format. VLC is distributed under the GNU GPLv2 license and as such, you can modify it as per the terms of the license.

Code submission for VLC was unclear at first. It is not described very well on the wiki. There is no guide stating what you are required to do to work on a bug, so I had to go hunting around and take a guess at some parts.

For bug tracking, the VLC team uses a website where a user or developer must sign up in order to submit a bug ticket or a patch. Once the bug is submitted, it seems that it is the responsibility of a volunteer to take it upon themselves to fix the bug. There does not seem to be a way to claim a bug to make it exclusively yours. One must simply pick a bug, work on a fix, and hope someone does not fix the same bug first. When the volunteer developer has completed the bug fix, they submit the patch in the ticket. Again, there seems to be no way to flag a ticket as containing a patch. The ticket system does let you search for tickets that contain patches. I assume that every once in a while someone from the VLC team goes through the tickets with patches and reviews the code, as there is no indication of anything else that takes place. I did see some other posters comment on the patches submitted by others.

A dedicated bug tracking system is an excellent tool for tracking known problems. At first, I thought what the VLC bug tracking system was missing was the option to claim or assign a ticket to an individual. If this were in place, it would allow the team to track who is working on what ticket so that work is not duplicated. After further thought, I understand why there is no way to claim a bug. What may happen is that someone with good intentions claims a ticket, but for whatever reason has to stop working on it. The bug might then go unfixed for a long time until it is released for someone else to claim. The way they have it now makes sense. As for code review, there was no clear indication of how long it would be before a patch was implemented.


CakePHP is a PHP framework. It follows the MVC principles that allow for rapid deployment of websites. It is currently licensed under the MIT License, a permissive license that is less restrictive than the GNU GPL.

CakePHP outlines a very clear process for contributing to the code through bug fixes. First you search through the tickets. When you find a bug you want to work on, you fork the repo and create a topic branch whose name includes the ticket number, so that others know you are working on that specific ticket. When you think the fix is done, CakePHP offers some tools and test suites to make sure you have not totally broken anything. After those tests pass, you push those changes to the topic branch in your fork. Once those changes are pushed, you create a pull request for your bug fix.

The time it takes for your code to be reviewed varies. I found most tickets had been closed after only a couple of days. This indicates that the core CakePHP team is quite active.

This process is very similar to the process that the VLC team uses except that there is a clear guide and explanation as to the process of bug fixing.



Personally, I have a lot of experience with technology. For me, figuring out how to find and submit a bug fix for one of these, or any open source project, would not be a difficult task. I don’t think it would be for anyone, as most of the information is easily accessible. I believe the most difficult part of this process would be learning to navigate the code of the actual project. The structure of each could be drastically different. I can personally see myself having the most problems there. My instinct would be to want an understanding of the workings of the project as a whole, when the best thing to do would be to break the project down into only what you need to know for the area you are working on.

I realize I am going to have to be able to focus myself on one thing at a time. It’s not that hard, is it?




by Brad at January 18, 2015 01:29 AM

January 17, 2015

Maxwell LeFevre

Open-Source Projects (Lab 1)

In our first lab for SPO600 we have been asked to select two pieces of open-source software with different licences and collect info on how to become involved in their development. The two pieces of open-source software I have chosen to look at for the first lab are LibreOffice and OpenCV.


Licence: Dual-licence GNU Lesser General Public License v3 / Mozilla Public Licence v2.
IRC Channel: #libreoffice-dev

LibreOffice is an open-source suite of cross-platform office applications whose development is done by the open-source community and is overseen by The Document Foundation.

Code Submission:
Detailed instructions on how to contribute a patch for LibreOffice can be found at The gist of code submission is as follows:

1. Set up Gerrit, a web based code review application for Git projects, on your system.
2. Add your information to the list of contributors and developers.
3. Check out the source code from the Git repository.
4. Make your desired changes.
5. Commit and then Push your changes back to the repository where they will wait for review.

Specific Ticket Tracked:
Duration: 1.5 hours

This ticket involved improving a PostgreSQL statement in one of the PostgreSQL drivers. All communication between the three people involved in the ticket (author, committer/code reviewer, and second code reviewer) happened through comments in the Gerrit code review system. The whole process after the code was submitted was completed in 1.5 hours. There were no issues, just a request for confirmation that the class was correct, which was given. All parties involved responded within an hour to requests made by others. After reviewing a number of other tickets I found that most of them were closed within days of the code being submitted.

Other Info:
Open tickets can be found at,n,z.


Licence: BSD 3-Clause
IRC Channel: #opencv (unofficial)

OpenCV, Open Source Computer Vision, is a library for dealing with realtime graphics in C++, C, Python, Java and MATLAB across a variety of platforms. It is managed by the OpenCV foundation.

Code Submission
Detailed instructions on how to contribute a patch for OpenCV can be found at The gist of code submission is as follows:

1. Install Git, check out a branch from GitHub, and clone it into your own branch.
2. Make your changes, test them, and push the branch back.
3. “Create a pull request from your branch to the base branch” (from the OpenCV website).
4. Your changes will be tested by a buildbot. If they pass, an OpenCV developer will review them and suggest changes.
5. When the reviewer approves your changes they will be merged with the main branch.

Specific Ticket Tracked
Duration: 3 days

This ticket was to fix a specific bug that causes a segmentation fault in the program. The two people involved (author and code reviewer) communicated on the GitHub page for the specific pull request. The fix took about 2 days from the time the code was posted until it was accepted and merged into the main branch. The reviewer felt that the original code, though it solved the problem, was too slow to implement and requested that some changes be made. The changes the reviewer suggested broke the code again so they both discussed other options. Eventually they isolated the root cause, which turned out to be a memory allocation issue, and fixed it. Multiple messages were exchanged each day so, although they were not time stamped, I can assume that the response time from both parties can be measured in hours, not days. The majority of the tickets I looked at followed a similar pattern: communication on the GitHub page for the specific pull request and a resolution within a couple of days.

Other Info:
Open tickets can be found at


While researching these two pieces of software I found that information on how to contribute to OpenCV was disseminated in a much more detailed and transparent way, with details on what to expect throughout the process, whereas LibreOffice’s documentation was a little harder to get at and lacking in information on what happens after you submit your code for review. I also looked at a number of other open-source projects and the general trend I noticed was that the larger the project, the harder they make it to become a contributor and to get information on how to get started. In a way I guess this makes sense, because they probably have a much larger number of people trying to make contributions and it would take longer to process the greater volume.

by maxwelllefevre at January 17, 2015 07:19 AM

January 16, 2015

Maxwell LeFevre

Benchmarking gzip

This will be my first post for SPO600 and, as requested, it contains the results of my attempt at benchmarking the gzip application on my computer.

System Details

OS: OS X version 10.10.1
Processor: 1.3 GHz Intel Core i5
Memory: 4 GB 1600 MHz DDR3
Graphics Card: Intel HD Graphics 5000 1536 MB


Before starting the test I went online and downloaded a variety of file types covering most of the standard files a user would want to compress. They included applications, images, games, text, an OS, a database file, and a few others. The files can be found here. I placed all the files into a single folder, closed all open applications, and stopped as many background processes as I could. Then, using Terminal, I compressed and decompressed the entire folder a few times to warm the cache, as suggested. Finally, I compressed the folder with the UNIX command ‘time tar -zcf archive.tar.gz tocompressTest’. The -z argument tells tar to use gzip to compress all the files in the folder recursively, -c tells it to create a new archive, and -f tells it to use the specified file name. I decompressed the file with ‘time tar -zxf archive.tar.gz’, where -x means extract. I did this 7 times to gather enough data points for an accurate estimate of the duration of compression and decompression.
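
Runs like these can also be scripted so every invocation is timed the same way. Below is a minimal sketch using Python’s subprocess module; the tar arguments and the tocompressTest folder are taken from this post and are assumed to already exist on disk.

```python
import subprocess
import time

def time_command(cmd, runs=7):
    """Run a command several times, returning wall-clock durations in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        durations.append(time.perf_counter() - start)
    return durations

# Usage (paths from this post, assumed to exist):
# compress_times   = time_command(["tar", "-zcf", "archive.tar.gz", "tocompressTest"])
# decompress_times = time_command(["tar", "-zxf", "archive.tar.gz"])
```

Scripting the runs keeps the measurement consistent across repetitions and makes it easy to average the results afterwards.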


File Info

Uncompressed Size Compressed Size Ratio
Image 4,965,247,752 bytes 2,027,172,987 bytes 59%

Test Run Details

Compression Duration Decompression Duration
5m40.486s 38.477s
5m37.073s 38.251s
5m37.876s 38.375s
5m38.961s 38.441s
5m40.258s 37.916s
5m38.302s 37.651s
5m37.401s 38.302s

The average duration for compression was 5 minutes and 38.622 seconds, a rate of about 14.66 MB/s. Decompression took an average of 38.202 seconds, a rate of about 129.97 MB/s.
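
The averages and throughput figures can be recomputed directly from the table above (times transcribed to seconds, rates in MB/s using the uncompressed size):

```python
# Run times from the table above, converted to seconds
compress   = [340.486, 337.073, 337.876, 338.961, 340.258, 338.302, 337.401]
decompress = [38.477, 38.251, 38.375, 38.441, 37.916, 37.651, 38.302]

size_bytes = 4_965_247_752  # uncompressed size of the folder

avg_compress   = sum(compress) / len(compress)      # ~338.62 s
avg_decompress = sum(decompress) / len(decompress)  # ~38.20 s

rate_compress   = size_bytes / avg_compress / 1e6    # ~14.66 MB/s
rate_decompress = size_bytes / avg_decompress / 1e6  # ~129.97 MB/s
```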


As expected, decompression is much faster than compression. The results I achieved were fairly consistent; only a few values were notably different. In compression, the difference between the fastest and slowest times was only 3.413 seconds, a difference of 1%. In decompression, only 0.826 seconds separated fastest from slowest, 2% of the total time. I think these variances can be attributed to background tasks that I was unable to stop. I also monitored CPU usage while running compression and noticed that tar remained single-threaded the whole time, so it could potentially be improved by implementing multithreading to take advantage of the multiple cores on modern processors.
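
On that single-threaded observation: each gzip stream is compressed sequentially, but independent files can be compressed concurrently (parallel tools such as pigz take a similar approach at the block level). A rough sketch of the idea in Python; since zlib releases the GIL while compressing, plain threads are enough to overlap the work:

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

def compress_many(buffers, workers=4):
    """Compress independent byte buffers concurrently, one gzip stream each."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(gzip.compress, buffers))
```

Each buffer becomes its own gzip stream, so a directory of separate files parallelizes naturally; a single large file would need block-level splitting, which is what pigz does.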

by maxwelllefevre at January 16, 2015 07:57 PM

Gabriel Castro

Starting work

This week I started work at CDOT. I will be working as part of a team on the Brakers project, a mobile app that will warn motorists of an incoming emergency vehicle. The project will initially target Android devices, which is the part I am working on; there is also a server component being written in Python by my coworkers.

by Gabriel Castro ( at January 16, 2015 03:46 PM

January 15, 2015

Jan Ona

Starting SPO600

Just started the SPO600 course at Seneca, which stands for Software Portability and Optimization. Not sure how I will be able to handle the course, but I’ll try!

by jangabrielona at January 15, 2015 06:53 PM

Nicolas Ramkay

Testing Planet CDOT feed.

This is just a test post for the feed used on Planet CDOT.

It picks up anything tagged with the ‘open-source’ tag.

Fingers crossed!

by nickramkay at January 15, 2015 05:28 PM

January 14, 2015

Hosung Hwang

CordovaStabilizer – Crosswalk in Blackberry 10

The conclusion is "Crosswalk doesn't work on Blackberry 10".

Blackberry 10 supports installing apk files.

I sent a sample Crosswalk apk file by email and tried to install it.

Screenshot from 2015-01-14 17:31:19

Screenshot from 2015-01-14 17:32:40

Screenshot from 2015-01-14 17:37:55

It seemed to install successfully, but when I ran it, it showed a CPU architecture mismatch error.

The apk I installed was the ARM version, so I made an Intel version of the Crosswalk app.

The Intel version of the apk is slightly bigger.

Screenshot from 2015-01-14 17:48:42

The result was the same: a CPU architecture mismatch error.

Crosswalk is basically for Android and Tizen.
For Blackberry, Cordova with the stock webview will be the solution.

Next step: testing Cordova on Blackberry.

by Hosung at January 14, 2015 11:28 PM

CORDOVASTABILIZER – Blackberry 10 simulator for Linux

To test Crosswalk and Cordova apps on Blackberry 10, I installed the Blackberry simulator on Linux (Ubuntu 14.04 LTS).

1. Installing the simulator

After downloading the setup file for Linux from this URL and running it, the ~/VMImages/BlackBerry10Simulator-BB10_3_1-995 folder contains a bunch of .vmdk and .vmx files, including BlackBerry10Simulator.vmx. These are VMware image files.

2. Installing VMWare Player

3. Run the simulator through VMWare Player
Run VMWare Player, open BlackBerry10Simulator.vmx, and run it.

Screenshot from 2015-01-14 16:16:38

4. Connect the controller to the simulator
Run controllers/controller from the install directory in a terminal:

hosung@hosung-Spectre:~/VMImages/BlackBerry10Simulator-BB10_3_1-995$ controllers/controller 

Screenshot from 2015-01-13 17:41:46

Using the controller, you can simulate incoming phone calls and various other things, as shown above.

by Hosung at January 14, 2015 09:27 PM

Klever Loza Vega

And so it begins…

Week 1
I’ve just finished my first week at CDOT and so far it’s been great. My two colleagues and I were given our first task for the term: make a Brackets extension that will check the syntax of HTML entered in the editor and flag any errors. This is similar to what Thimble already does.


Thimble is a text editor, created by Mozilla, where you can write, edit, and preview HTML code all within a browser. It also lets you publish and share webpages you’ve created in it. Thimble automatically highlights any HTML that is written incorrectly or is incomplete. It does this using an HTML parser called Slowparse. Our project involves bringing that functionality over to Brackets.


Brackets is an open source text editor built by Adobe. It’s essentially a web application as it’s written in HTML, CSS, and JavaScript. Brackets has a neat feature that allows you to Live Preview your code on your browser as you write it. There’s no need to refresh the browser every time you make a change to your code.

Hacking Brackets and Extensions

Since Brackets is open source, we have many advantages. For instance, we not only have access to Brackets’ source code, we can also make changes to it. Brackets also supports extensions, and many already exist; if there isn’t one that suits your needs, you can just create it yourself. Brackets makes it really easy to write your own extension, and there are many guides out there to help. One guide that has been very useful to me has been Writing Brackets extension – Part 1 and Part 2. Part 2 was particularly helpful as it deals with tracking events in Brackets, something I need for the project I’m working on.

The Project

My part of the project deals with getting errors underlined and tracking changes made to the document via keyboard events. So far, I’ve found a way to both underline incorrect text in red and remove the underline. I also have the keyboard event working. That is, the extension checks the document’s syntax with Slowparse every time a key is pressed. Since Brackets can edit different file types, I’ve added code to check syntax only on documents that have the .html extension. So, if a JavaScript file is loaded in Brackets, it won’t try to check the syntax of that file.

Screen Shot 2015-01-13 at 10.30.41 PM


Early on I found myself stuck trying to understand how the Brackets API and CodeMirror work – CodeMirror is the text editor that Brackets uses. There was a lot of documentation to read and it was difficult to figure out what actually applied to my problem and what didn’t. I was having problems getting the text to underline. There was a lot of trial and error with different functions and configurations until finally, largely thanks to a longer-serving colleague, it worked!

Here’s the code that underlines text that worked for me:

/*jslint vars: true, plusplus: true, devel: true, nomen: true,
  regexp: true, indent: 4, maxerr: 50 */
/*global define, CodeMirror, brackets, less, $, document */

define(function (require, exports, module) {
  "use strict";

  var EditorManager = brackets.getModule("editor/EditorManager"),
     ExtensionUtils = brackets.getModule("utils/ExtensionUtils");

  ExtensionUtils.loadStyleSheet(module, "main.less");

  //Function that underlines the given lines
  function markErrors(lineStart, lineEnd, charStart, charEnd) {
     var editor = EditorManager.getFocusedEditor();

     var marked = editor._codeMirror.markText({line: lineStart, ch: charStart},
                  {line: lineEnd, ch: charEnd}, {className: "error-highlight"});

     return marked; // keep the marker so the underline can be removed later
  }
});


What I Learned

Ask for help! I was so caught up trying to solve my problem that I didn’t realize all the time that had passed. Time that could have been put towards other things had I asked for help earlier on.
The way we solved my text underlining problem was by using the Chrome Developer Tools integrated in Brackets – you can get there by clicking on the Debug tab and clicking on Show Developer Tools. I learned a couple of things using the developer tools:

  • First, you can use it to debug your code by putting the console.log() function in your code and watching the result in the console. This is useful for tracking the order in which your code executes and whether it enters a certain function or not.
  • Secondly, if you console.log an object you can actually see, by clicking on a small tab in the console, all the attributes and functions associated with that object.
  • Lastly, just because the console shows an error doesn’t necessarily mean your code won’t continue to run! The error could be occurring at an earlier point than, say, the function you’re working on. There were often times when I saw an error and spent time trying to fix it, only to find out it was completely unrelated to the function I was working on!

Apart from this, I’ve also become more familiar with Git and GitHub. More importantly, I’ve learned the importance of proper planning and having a good work flow – more on this next week.

Next Week

My colleagues and I hope to finish the extension by the end of week 2. We’re also hoping to have a GUI that displays the type of error found in the code, similar to Thimble.

by Klever Loza Vega at January 14, 2015 05:57 AM

Hong Zhan Huang

Starting a new game (course) and it’s called SPO600

Software Portability and Optimization 600. I should have played all the prequels first. Well, for better or worse, I am now an SPO600 player. New game start!

by hzhuang3 at January 14, 2015 03:56 AM

January 13, 2015

Neil Guzman

CDOT demo #1

Looks like a successful first demo for CDOT (Seneca Centre for Development of Open Technology). Apparently, our team, BRAKERS (main site located here), was closer to being a presentation than a demo.

To recap, our "presentation" reintroduced the BRAKERS team to the rest of CDOT, our progress, and problems we have encountered.

Our team is building an application to warn motorists of approaching emergency responder vehicles, to help reduce crash rates. The team is divided into a mobile side and a server side, and I am part of the server side. We have decided that our server will use the Tornado framework for its asynchronous networking library (as briefly discussed previously).

On the server side, we have mainly researched the possible tools and frameworks we could use (Erlang, Twisted, Go, etc.) and have also read about certain heuristics to help in choosing the I/O strategy. We looked at benchmarks, python frameworks, and other resources to help decide on what to use. In the end, we chose the Tornado framework because:

  1. It is fast, has a low error rate, and can handle thousands of connections

  2. It is based on Python, which is simple to use and is supported with many libraries and documentation

  3. We need to build a working "final" prototype quickly

The speed of prototyping is, I think, the most important of the criteria we used in choosing the tools; we chose Python for its quick prototyping capabilities and available resources.

Aside from research, I have gotten a simple chat server working with SSL to familiarize myself with the tools at hand. We also ran into an issue with sending unsigned data in Java, but I think Python’s struct module should fix that.
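
Python’s struct module can pack values into fixed-width, explicitly unsigned, big-endian fields, so both sides only have to agree on the wire layout. A small sketch; the two-field message format here is invented for illustration and is not the actual BRAKERS protocol:

```python
import struct

# Hypothetical message: unsigned 32-bit id + unsigned 16-bit flags,
# big-endian ("network order"). A Java client can read these with
# DataInputStream and mask the results back to unsigned values.
def pack_message(msg_id, flags):
    return struct.pack(">IH", msg_id, flags)

def unpack_message(payload):
    return struct.unpack(">IH", payload)
```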

by nbguzman at January 13, 2015 09:49 PM

Christopher Markieta

CDOT Winter 2015 Initiation

Welcome Back

It has been quite a while since my last blog post, but I promise to submit entries more frequently from now on.

It is the start of a new semester here at Seneca, and I am continuing my 6th semester of the Computer Programming and Analysis program, and my 5th term as a part-time research assistant (RA) at the Centre for Development of Open Technology (CDOT).

This season is quite exciting for some of us as we are springing into many diverse projects with new RAs on-board.


My team is currently working on implementing a Python asynchronous networking server and developing an Android app for BRAKERS Early Warning Systems INC.

Python Networking Server

Neil and I have been researching and benchmarking different web frameworks to be used for the networking server behind the Android app. We are considering Tornado, an alternative to the well-known Twisted framework. Tornado is a Python web server as well as an asynchronous networking library, which will allow us to work on the backend.

Initial Android Application

My other group members are developing an Android application to be ready for real-world testing in February.

Weekly Demos and Presentations

Today we will begin our weekly demos and presentations, where individuals or groups within CDOT will briefly demonstrate what they have been working on over the past week or so. Since this is our 2nd week after the holidays, I will be giving a recap of our project and what our team has been working on thus far.

by Christopher Markieta ( at January 13, 2015 09:27 PM

Yan Song

Hello SPD600!

Now is the time for all good men to come to the aid of the party.

It’s not just filler text; it’s also open source.

by ysong55 at January 13, 2015 07:57 PM

January 12, 2015

Lukas Blakk (lsblakk)

Contribution opportunity: Early Feedback Community Release Manager

Did you make a New Year’s resolution to build out your software development skillset with a focused contribution to an open source project? Do you want to work with a small team where you can have a big impact?

We’re once again looking for someone committed to learning the deepest, darkest secrets of release management when shipping a big, open source software project to half a billion users. Our small team of 3 employees already has one contributor who has been a consistent volunteer for Release Management since 2013 and has worked his way up from these tasks to taking on coordination and growth strategy for the Extended Support releases. Our commitment to contributors is that we will do our best to keep you engaged and learning; this is not grunt work, it’s deep participation that matters to the organization and to the products we ship.

You’ll need to consistently commit 1-3 hours a week to helping analyze our Nightly channel (aka mozilla-central or ‘trunk’) and raise issues that need prompt attention. The very fabulous volunteer who takes on this task will get mentoring on tools and process, and will build up awareness of the risks in shipping software, starting at the earliest stage in development. On our Nightly/trunk channel there can be over 3000 changes in a 6-week development cycle, and you’d be the primary person calling out potentially critical issues so they are less likely to cause pain on the user-facing release channels with larger audiences.

A long time back, in a post about developing community IT positions, mrz recalled a post where I stated that for successful integration of community volunteers with paid staff, an organization has to dedicate time for working with that community member within an employee’s hours, so that the experience can be positive for both parties. It can’t just be “off the side of the desk” for the employee, because that creates a risk of burnout, which can lead to communication irregularities with the volunteer and make them feel unappreciated. For this community release manager position I will dedicate 1-3 hours each week to actively on-board and guide this community Release Manager contributor, to ensure they get the skills they need while we get the quality improvements in our product.

Here is the “official” call for help, come get in on the excitement with us!


  • Are familiar with, and interested in, the distributed development tools (version control, bug tracker) typically used in an open source project of size (remember when I said half a billion users? Ya, it’s not a small code base)
  • Want to learn (or already know) how to identify critical issues in a pool of bugs filed against a code base that branches every 6 weeks
  • Have worked in open source, or are extremely enthusiastic about learning how to do things in the open with a very diverse, global community of passionate contributors
  • Can demonstrate facility with public communications (do you blog, tweet, have a presence online with an audience?)
  • Will be part of the team that drives what goes in to final Firefox releases
  • Learn to coordinate across functional teams (security, support, engineering, quality assurance, marketing, localization)
  • Have an opportunity to develop tools & work with us to improve existing release processes and build your portfolio/resume
  • Can commit to at least 6 months (longer is even better) of regular participation – this will benefit you by giving you time to really get hands-on experience and understanding of release cycles


  • Mentor and guide your learning in how to ship a massive, open source software project under a brand that’s comparable to major for-profit technology companies (read: we’re competitive but we’re doing it for a mission-driven org)
  • Teach you how to triage bugs and work with engineers to uncover issues and develop your intuition and decision making skills when weighing security/stability concerns with what’s best for our users
  • On-site time with Mozillians: select attendance at team & company work weeks – access to engineers, project managers, and other functional teams – get real world experience in how to work cross-functionally
  • Provide work references about how awesome you are, various swag, and sometimes cupcakes :)

I’ll be posting this around and looking to chat with people either in person (if you’re in the Bay Area) or over video chat. The best part is you can be anywhere in the world; we can work out a schedule that ensures you get the guidance and mentoring you’re looking for. Reach out to me on IRC (lsblakk), on Twitter (@lsblakk) or email (lsblakk at

Look forward to hearing from you! Let’s roll up our sleeves and make Firefox even better for our users!

by Lukas at January 12, 2015 11:07 PM

Andrew Benner

First Project — Week 1 January 5-9, 2015

This week is the beginning of my co-op work term at Seneca’s Centre for Development of Open Technology (CDOT). I began Monday with orientation, which allowed me to learn about the other projects at CDOT. After orientation, our team was given our first project: create an extension for the text editor Brackets. The extension will use Slowparse to give the user real-time feedback on the accuracy of HTML5 code typed in the text editor.

Our team determined that there were three main parts to making this extension. We would need to create a user interface, we had to learn how to make an extension for Brackets, and then we had to incorporate Slowparse. My task was to learn how Slowparse works and figure out how we’ll be able to use it in our extension.

Slowparse can be installed by simply running the npm install slowparse command in the terminal. Once installed, you can include Slowparse using the require function. Example: var Slowparse = require(“slowparse”); To use Slowparse you call its .HTML function. Example: var result = Slowparse.HTML(document, ‘… html source here …’, options); The function takes a DOM context as the first argument and HTML5 source code as the second. The third argument is optional. I determined that we wouldn’t need the optional third argument for our extension, so I won’t explain it here; if you want to read about it, it’s covered on the Slowparse github page linked above, under the Using Slowparse heading. If Slowparse yields a result without the .error property, the source code can be considered valid HTML5. If you have an error, you can console.log(result.error) and notice that Slowparse returns the error as an object. Once I had finished reading up on Slowparse, I began to experiment.

I first tried to use Slowparse in Node, but ran into trouble since there is no DOM and the first argument of the Slowparse function requires a DOM context. I did a little searching around and found DOM builders that I could use, but decided that might be getting a bit too deep for what I was trying to do. Next, I tried to run Slowparse in plain JavaScript, but there is no require function there to include the Slowparse package. Since I wasn’t able to just plug Slowparse into Node or plain JavaScript to test, I decided to dive right in and start testing using the Brackets API. The Brackets API already includes a DOM builder, so that was one aspect I didn’t have to worry about, and I could begin testing.

After some testing, we determined that we couldn’t just return the Slowparse error object because it wasn’t very human-friendly to read and understand. Luckily, Slowparse includes a JSON object that contains all the errors in a more human-readable HTML form. All I had to do was include the JSON object and index it properly based on the Slowparse error object that was returned. After some discussion with my teammates, I found out that to highlight the error in the text editor we would need to know the start and end of the error. I had to update my code to include the start and end, which were also found in the Slowparse object returned.

Currently, I’m still working on updating the code to return the accurate start and end of each error. The way the Slowparse object is returned seems to change based on the error, so I have to go through each error to make sure it’s indexed properly. So far, it has been a great first week for me. I’ve learned a lot and interact well with my teammates. I look forward to my next week of work and can’t wait to get this extension working.

by ajdbenner at January 12, 2015 03:55 PM

Tai Nguyen

Passing On Failed Issues and Using GitHub as a Learning Tool

This post is related to my previous post, where I talked about an issue I passed on to another person because I was unable to solve it. A point I forgot to mention is that just because you fail to solve an issue doesn’t mean you can’t get anything from it. One thing you can get is insight: you can always learn from the person who solved the issue you couldn’t, by observing how they implemented their solution. For instance, in my previous post I talked about trying to align all the components in the navigation bar. My attempt failed because I was trying to solve the issue using a series of hard techniques; little did I know that web technologies had progressed to the point where there is now an easier way to do this, with the introduction of CSS flexbox. After looking at the other person’s solution, which used flexbox (something I didn’t know about before), I was astonished to find out there was such a way of doing things.

I took it to another level. I noticed that the person who solved my issue didn’t fully implement the fix, because the same problem existed in other areas of the project (I assume they didn’t want to go through it all). That doesn’t matter, though: all we needed was the solution itself, which we could apply to the same problem in the other areas. This is where I came in. Using their solution, I made changes to the other files and added some things of my own that I thought would benefit the project.

Here’s my PR:

by droxxes at January 12, 2015 03:28 PM

January 10, 2015

Andrew Smith

Your password is too… hard to break

I figured at some point after Heartbleed (after sites had time to get themselves patched) I should change all my passwords for valuable services. I’m doing that now, and I was shocked by a couple of sites where it wouldn’t let me change my password because the new one was too complex :) The last time I was hit with an error like that was when I wanted a longer-than-4-digit PIN on my credit/debit cards. (probably the same on helpfully says:

Your new password must: not contain any spaces, symbols or characters with accents.


You have entered an invalid password. Please re-enter your password, using only alpha-numeric characters.

Don’t get me wrong, I am not a fan of retarded password policies (e.g. must have two special characters, uppercase, numbers) but I feel it’s even more retarded to prevent people from using those characters if they want to.

What this tells me is that these services almost certainly store my password in plain text, in some ancient IBM mainframe database column that is incapable of storing anything other than letters and digits. Shame on them!
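
For contrast: if a service salts and hashes passwords instead of storing them, the allowed character set stops mattering, because whatever you type is reduced to a fixed-size digest before it ever hits the database. A minimal sketch using Python’s standard library PBKDF2 (the iteration count here is just a plausible choice, not any site’s actual policy):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Any password (spaces, symbols, accents) becomes a fixed-size digest."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The stored salt and digest are plain bytes of a fixed length, so a legacy letters-and-digits-only column is no excuse once they are hex- or base64-encoded.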

by Andrew Smith at January 10, 2015 11:03 PM