Planet CDOT

December 20, 2014

Hosung Hwang

CordovaStabilizer – Research Crosswalk 2

Thought about Crosswalk project

While testing Crosswalk, I found that this project is part of a strategy to promote "Intel Inside" Android phones by making it easy to build apks for Intel x86 processors, with a significant benefit: hybrid apps that embed a new Chromium WebView. For Android app developers, testing on various devices and Android versions is very annoying work. Furthermore, they recently have to build and submit an apk to the Play Store for another CPU that only a few phones use. Crosswalk automates building apks for both ARM and Intel, and the Chromium WebView is pretty attractive to developers. It also supports Tizen (made by Samsung and Intel). Therefore, I guess Intel, as a newcomer in smartphone chips, will maintain and promote this project to attract developers and hardware makers.

Reference :

Android on Intel Platforms
Submitting Multiple Crosswalk APKs to the Google Play Store
ARM vs. Intel: What It Means for Windows, Chromebook, and Android Software Compatibility
Smartphones with Intel Inside

Crosswalk Source Code and Building

I followed these instructions:
But I think their script has some errors.

I compared the source code with the Chromium source code.
The significant difference was that they added ozone and xwalk.
Ozone is a kind of windowing-system abstraction; I guess it is for supporting the Linux desktop. According to the documents, Cordova does not support the Linux desktop, but Crosswalk does. (I haven't tested it yet.)
xwalk is the newly added source code and build environment.

Crosswalk Test

Today, I tested multiple Crosswalk applications. They worked well.
I also built release versions and compared their apk sizes.

Simple Cordova App

build : cordova build --release
size : 340 Bytes

The same Cordova App in Crosswalk

build : ./cordova build --release
        (build script written in node.js inside cordova directory)
size : 18 Mega Bytes

The Crosswalk apk that bundles the WebView is 18 MB; it takes time to download to the device and install.
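As a quick sanity check on those numbers, apk sizes can be compared from the shell. This is a minimal sketch; the apk files below are dummy stand-ins created only so the size check is runnable, and the paths are hypothetical:

```shell
# Dummy stand-ins for the two apks (sizes taken from the post; not real builds).
mkdir -p apk-demo
head -c 340 /dev/zero > apk-demo/cordova-app.apk                  # plain Cordova build
head -c $((18 * 1024 * 1024)) /dev/zero > apk-demo/crosswalk-app.apk  # Crosswalk build

# Report each apk's size in bytes; wc -c is portable across shells.
for apk in apk-demo/*.apk; do
  printf '%s: %s bytes\n' "$apk" "$(wc -c < "$apk")"
done
```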

Shared vs. embedded mode

Embedded mode is what I tested before; it bundles the WebView.
Shared mode shares one Crosswalk runtime as a common WebView, so the apk is very small.
This page (Running a Crosswalk app on Android) shows how to set up shared mode.
It needs additional tools from .
It doesn't work properly yet; I'll try again next week.

Next Step

  • Test shared mode
  • Make shared-mode builds of multiple apps for a company
  • Make an environment for checking the version of the shared WebView and updating it

by Hosung at December 20, 2014 12:12 AM

December 17, 2014

Hosung Hwang

CordovaStabilizer – Research Crosswalk

From a link in yesterday's posting to the blog of Joe Bowser (a co-founder of Cordova), Andrew found an interesting project called Crosswalk.

According to the description, Crosswalk was exactly what this CordovaStabilizer project was trying to do, and their architecture was almost the same as my plan.

I tried to make a Crosswalk-Cordova-Android app and checked whether it really uses the Chromium WebView rather than the system WebView.

Make a Crosswalk project and launch it, following this page

1. Download crosswalk-cordova-android bundle
$ wget

2. Create Sample app
crosswalk-cordova-$ ./create HelloWorld \
     org.crosswalkproject.sample HelloWorld

3. Build and run
$ cd HelloWorld
$ ./cordova/run

It works.

Change the html code to redirect to a page that tests HTML5 features.
I used this page:

1. Change this page : 

2. in the body tag, redirect to

3. run : ./cordova/run

Change normal Cordova project to open the same page

1. Change this page (what I made yesterday) :

2. in the body tag, redirect to

3. run : $ cordova run android

I tested it on an Android 4.0.2 Ice Cream Sandwich phone.
In the normal Cordova app, WebRTC features and many other features didn't work.
However, in the Crosswalk-cordova-android example, many HTML5 features and WebRTC features worked, the same as in the Chrome browser installed on the phone.
WebGL features didn't work, but the same example did not work even in the Chrome browser.
I guess this phone does not support hardware acceleration.

Clearly, Crosswalk-cordova-android used the Chromium WebView rather than the system WebView on Android 4.0.2.

Source code of Crosswalk and building it
This page describes how to build Crosswalk for Android.
It is based on the Chromium source code and Cordova. Looking at how they did it seems interesting.


by Hosung at December 17, 2014 10:40 PM

December 16, 2014

Hosung Hwang

CordovaStabilizer – Cordova Structure and WebView Interface Analysis

I started from the initial commit of the Android PhoneGap code in 2008 by Joe Bowser. It was a very simple Android WebView project that opens a remote web page and exposes some functions, such as vibrate, location, taking photos, and playing sound, as an interface from JavaScript. That is the core idea of Cordova.

There are two ways to make a Cordova-based Android app.

1. Using command line tools and HTML/Javascript

Summary of Apache Cordova Tutorial

1. Install Cordova tools : sudo npm install -g cordova
2. Make a project : cordova create workshop com.yourname.workshop Workshop
   -> this produces the following directory tree
├── config.xml
├── hooks
│   └──
├── platforms
├── plugins
└── www
    ├── css
    │   └── index.css
    ├── img
    │   └── logo.png
    ├── index.html
    └── js
        └── index.js
3. Add the android platform : cordova platforms add android
  -> this command generates a skeleton android project inside the platforms directory.
4. Add basic plugins :
    cordova plugin add org.apache.cordova.device
    cordova plugin add org.apache.cordova.console
5. change www folder to your own HTML5 app
6. build/deploy : cordova run android
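To visualize what step 2 produces, the skeleton can be recreated by hand. This only mirrors the directory tree shown above for illustration; it is not a replacement for running the real cordova create command:

```shell
# Recreate the "cordova create" skeleton layout from the tree above (illustration only).
mkdir -p workshop/hooks workshop/platforms workshop/plugins \
         workshop/www/css workshop/www/img workshop/www/js
touch workshop/config.xml workshop/www/index.html \
      workshop/www/css/index.css workshop/www/img/logo.png \
      workshop/www/js/index.js

# List the files, mirroring the tree in the tutorial summary.
find workshop -type f | sort
```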

This way doesn't require Eclipse for editing Java source code, because the Android project inside the /workshop/platforms/android directory is automatically generated by cordova platforms add android and should not be edited directly.

2. Using the cordova-3.7.0-dev.jar file in an Android project
The cordova-3.7.0-dev.jar file is generated by the Cordova build process.
According to the Cordova documentation, they support using a custom WebView by implementing CordovaInterface.
-> How to embed a Cordova-enabled WebView component within a larger Android application.

Next Step
Look into the Android WebView Shell test project and, if possible, change it into a Cordova app. If that is too complex, after understanding how to use the WebView libraries, make my own Cordova app.

by Hosung at December 16, 2014 10:02 PM

CordovaStabilizer – Chromium Android build process analysis

Along with gn and gyp, another build system Chromium uses is ninja. It has a form similar to make (rules, targets, ...).

In the Chromium Android build process, the significant changes to the build scripts happen when gclient runhooks runs. Before this step, no file is changed.

1. gclient runhooks

An important directory for building the Chromium Android WebView is the /src/out/Release/obj directory. Before running the gclient runhooks command, there is no /src/out/Release/android_webview_apk directory.

When gclient is called, it generates ninja files based on targets that are written in /.gclient, /.gclient_entries, /chromium.gyp_env, and the /src/DEPS file.


solutions = [
  {
    "managed": False,
    "name": "src",
    "url": "",
    "custom_deps": {},
    "deps_file": ".DEPS.git",
    "safesync_url": "",
  },
]
target_os = ["android"]


entries = {
  u'src': u'',
  'src/breakpad/src': '',
  'src/buildtools': '',
  [... 95 entries ... ]
}


{ 'GYP_DEFINES': 'OS=android', }

Now, in the /src/out/Release/android_webview_apk directory, there are many .ninja files, including one for the Android WebView shell and one for the Android System WebView apk. Meanwhile, the contents of the /src/out/Release/ file are also changed.

2. ninja -C out/Release android_webview_apk
When ninja android_webview_apk runs, the /src/out/Release/ file is called with the "android_webview_apk" build target.
Inside it, /src/out/Release/obj/android_webview/ is executed.


cc = $
cxx = $
ld = $cc
ldxx = $cxx
ar = ar
[... lots of compiler settings and options .. ]
subninja obj/android_webview/
[... many subninja ...]
build android_webview_apk: phony obj/android_webview/android_webview_apk.actions_rules_copies.stamp

subninja includes other ninja files.
build android_webview_apk sets the build target.
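To see how subninja and a phony target fit together, here is a fabricated miniature of the same structure. The file names and rules are invented for illustration; these are not the real Chromium build files:

```shell
# Fabricated miniature of the subninja / phony-target structure described above.
mkdir -p ninja-demo/obj/android_webview

# A sub file declaring one concrete build edge (quoted heredoc keeps $out literal).
cat > ninja-demo/obj/android_webview/sub.ninja <<'EOF'
rule touch
  command = touch $out
build obj/android_webview/stamp: touch
EOF

# The top-level file pulls in the sub file and declares the phony target.
cat > ninja-demo/build.ninja <<'EOF'
subninja obj/android_webview/sub.ninja
build android_webview_apk: phony obj/android_webview/stamp
EOF

# List the declared build targets without invoking ninja itself.
grep -h '^build ' ninja-demo/build.ninja ninja-demo/obj/android_webview/sub.ninja
```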

3. building android_webview_apk target

build obj/android_webview/android_webview_apk.actions_rules_copies.stamp: $
    stamp android_webview_apk/ $
    android_webview_apk/proguard.txt android_webview_apk/codegen.stamp $
    android_webview_apk/compile.stamp $
    android_webview_apk/android_webview_apk.javac.jar $
    android_webview_apk/instr.stamp $

This part and the following dependencies seem to be the files for the Android WebView package.

Next step
It should be possible to: add a Java file, change the code, change the DEPS file, regenerate the ninja files, and build the apk; then connect it with the Cordova sample program after analysing the WebView shell program.

by Hosung at December 16, 2014 01:14 AM

December 15, 2014

David Humphrey

Video killed the radio star

One of the personal experiments I'm considering in 2015 is a conscious movement away from video-based participation in open source communities. There are a number of reasons, but the main one is that I have found the preference for "realtime," video-based communication media inevitably leads to ever narrowing circles of interaction, and eventually, exclusion.

I'll speak about Mozilla, since that's the community I know best, but I suspect a version of this is happening in other places as well. At some point in the past few years, Mozilla (the company) introduced a video conferencing system called Vidyo. It's pretty amazing. Vidyo makes it trivial to setup a virtual meeting with many people simultaneously or do a 1:1 call with just one person. I've spent hundreds of hours on Vidyo calls with Mozilla, and other than the usual complaints one could level against meetings in general, I've found them very productive and useful, especially being able to see and hear colleagues on the other side of the country or planet.

Vidyo is so effective that for many parts of the project, it has become the default way people interact. If I need to talk to you about a piece of code, for example, it would be faster if we both just hopped into Vidyo and spent 10 minutes hashing things out. And so we do. I'm guilty of this.

I'm talking about Vidyo above, but substitute Skype or Google Hangouts or some cool WebRTC thing your friend is building on Github. Video conferencing isn't a negative technology, and provides some incredible benefits. I believe it's part of what allows Mozilla to be such a successful remote-friendly workplace (vs. project). I don't believe, however, that it strengthens open source communities in the same way.

It's possible on Vidyo to send an invitation URL to someone without an account (you need an account to use it, by the way). You have to be invited, though. Unlike irc, for example, there is no potential for lurking (I spent years learning about Mozilla code by lurking on irc in #developers). You're in or you're out, and people need to decide which it will be. Some people work around this by recording the calls and posting them online. The difficulty here is that doing so converts what was participation into performance--one can watch what happened, but not engage it, not join the conversation and therefore the decision making. And the more we use video, the more likely we are to have that be where we make decisions, further making it difficult for those not in the meeting to be part of the discussion.

Even knowing that decisions have been made becomes difficult in a world where those decisions aren't sticky, and go un-indexed. If we decided in a mailing list, bug, irc discussion, Github issue, etc. we could at least hope to go back and search for it. So too could interested members of the community, who may wish to follow along with what's happening, or look back later when the details around how the decision came to be become important.

I'll go further and suggest that in global, open projects, the idea that we can schedule a "call" with interested and affected parties is necessarily flawed. There is no time we can pick that has us all, in all timezones, able to participate. We shouldn't fool ourselves: such a communication paradigm is necessarily geographically rooted; it includes people here, even though it gives the impression that everyone and anyone could be here. They aren't. They can't be. The internet has already solved this problem by privileging asynchronous communication. Video is synchronous.

Not everything can or should be open and public. I've found that certain types of communication work really well over video, and we get into problems when we do too much over email, mailing lists, or bugs. For example, a conversation with a person that requires some degree of personal nuance. We waste a lot of time, and cause unnecessary hurt, when we always choose open, asynchronous, public communication media. Often scheduling an in person meeting, getting on the phone, or using video chat would allow us to break through a difficult impasse with another person.

But when all we're doing is meeting as a group to discuss something public, I think it's worth asking the question: why aren't we engaging in a more open way? Why aren't we making it possible for new and unexpected people to observe, join, and challenge us? It turns out it's a lot easier and faster to make decisions in a small group of people you've pre-chosen and invited; but we should consider what we give up in the name of efficiency, especially in terms of diversity and the possibility of community engagement.

When I first started bringing students into open source communities like Mozilla, I liked to tell them that what we were doing would be impossible with other large products and companies. Imagine showing up at the offices of Corp X and asking to be allowed to sit quietly in the back of the conference room while the engineers all met. Being able to take them right into the heart of a global project, uninvited, and armed only with a web browser, was a powerful statement; it says: "You don't need permission to be one of us."

I don't think that's as true as it used to be. You do need permission to be involved with video-only communities, where you literally have to be invited before taking part. Where most companies need to guard against leaks and breaches of many kinds, an open project/company needs to regularly audit to ensure that its process is porous enough for new things to get in from the outside, and for those on the inside to regularly encounter the public.

I don't know what the right balance is exactly, and as with most aspects of my life where I become unbalanced, the solution is to try swinging back in the other direction until I can find equilibrium. In 2015 I'm going to prefer modes of participation in Mozilla that aren't video-based. Maybe it will mean that those who want to work with me will be encouraged to consider doing the same, or maybe it will mean that I increasingly find myself on the outside. Knowing what I do of Mozilla, and its expressed commitment to working open, I'm hopeful that it will be the former. We'll see.

by David Humphrey at December 15, 2014 10:31 PM

December 14, 2014

Tai Nguyen

How Does GitHub Affect You

An interesting article posted by Casey Ark in The Washington Post describes his hardship in finding a job after graduating with the proper education and credits. He did well in school, took the right courses, and graduated at the top of his class in information systems; he did everything possible to give himself better prospects with hiring managers. He said that his degree was supposed to make him a qualified programmer, but by the time he left school, all of the programming languages he'd learned had become obsolete. So, to find real work, he had to teach himself new technologies and skills outside of class. Graduates are finding that a college education (conceptually based) is not enough; most companies no longer care what their recruits majored in, since they would have to extensively train them regardless.
Companies are looking for recruits who have practical experience from the get-go and can get things done. The issue with universities is that they aren't giving you the practical skills and experience to do well in the real world.

Here is the link to the article:

Casey Ark ,The Washington Post Article

by droxxes at December 14, 2014 07:33 PM

The Cathedral and the Bazaar

Eric Steven Raymond, an American software developer and influential open source software advocate, wrote an intriguing book called "The Cathedral and the Bazaar" comparing two development styles: the "cathedral" model (the model preferred mostly by the commercial world) and the "bazaar" model (the model of the open source world).

Important software (for instance, operating systems like Microsoft's Windows) was built like a cathedral, carefully developed by talented individuals in isolation. There are certain complex situations in which a more centralized and theoretical approach is necessary.

Open source projects like Linux, on the other hand, are developed by a community of developers scattered around the globe. The open source community can be likened to a babbling bazaar out of which a coherent and stable system emerges. It consists of a collaborative collective of people who may have differing agendas and approaches, but who are ultimately working towards the same overarching goal.

The Cathedral and Bazaar Article

by droxxes at December 14, 2014 07:26 PM

Kieran Sedgwick

[OSD600] Release 0.4

My 0.4 release is smaller in scope than the last two, consisting of a single issue:

Configure default builtins and expose FileSystem API on shell instances (Issue #273)

This is a very interesting issue, and one that I suspect is far from finalized. The core idea was to make FileSystemShell objects act as wrappers for their bound FileSystem objects, and then have convenience methods available through custom shells that are shipped as separate modules.

One possible approach was to load the logic for these convenience methods from files existing in the filer tree, added by the extra shell module. These could then be run with Shell.exec() wrapped in a function, making for a flexible way of defining custom shells.

I chose a simpler approach, adding the FileSystem methods to the FileSystemShell instances and factoring the convenience methods into another internal module. This sets the stage for a couple of different ways to branch the Shell modules out, and this is what’s still to be determined.

A secondary issue came out of this one, when I discovered some weirdness in a couple of tests that needed to be fixed for this patch to land. They were distinct enough that I separated them. See that issue here.

PR: linky

by ksedgwick at December 14, 2014 05:07 PM

December 13, 2014

Tai Nguyen

Release Milestone 3 & 4 : Final Post


It's been a long journey and a big learning experience for me through the DPS909 course. I will talk briefly about my learning experience in the open source community, and then about my final release for milestones 3 & 4, which I am covering together in one post instead of two separate posts. Before taking this course, I didn't know much about the open source community, although I had a sense that it is where I wanted to take my first step into the real world of work. My problem was that I didn't quite know where to start; I had questions like "How do I get involved in an open source project?" and "How do I work with tools like GitHub and Git?". Let's just say, if the open source community were a deep pool of water, I would have drowned if I dove in. Long story short, reaching the end of this course and having contributed to an open source project known as Webmaker-App, I have reached the point where I am confident enough to dive into an open source project and swim my way around with my strong hold on GitHub and Git.

Milestone 3 & 4:

Milestone 3 was a disaster for me. I think I took on an issue that was far beyond my capabilities; you could say I was a little in over my head. If you read my milestone 2 release post, you would know that I had been working on an issue that required me to implement a toggle control component. The easy part was coming up with the CSS; the hard part was actually implementing it in the project itself. I started to feel in over my head partially because I had never worked on the code base before. Long story short, most of my time was spent trying to understand the code base and figure out how to implement my code, until it was too late: someone else took over, implemented their solution, and closed the issue I was working on. I was upset initially, but looked on the optimistic side of things. Instead of quitting entirely, I decided it would do me some good to try to understand how this person implemented their solution to the issue I had struggled so hard to solve. After gaining some knowledge of the workings of the code base, I had to find another issue to work on (at this point I was way past the deadline). Luckily, I found an issue that was simple enough, yet challenging enough to be a good bug for learning something new.

The issue that I had been working on involved modifying the UI. Essentially, my task was to replace the default browser's input checkbox element with a bigger and more user-friendly one. More information about the issue can be found at this link: "Issue: Sign In – UI adjustments to "Choose A Username" screen". My first step was to come up with a CSS mock-up on CodePen. I wanted a visual mock-up of how the checkbox would look and came up with this: Checkbox Mock-up. Once I completed my CSS code for the checkbox and was happy with how it looked, I had to figure out how to implement it in the Webmaker-App project. The part I struggled with was determining where to put the code, because Webmaker-App had issues with the organization of its CSS; basically, the CSS code for different modules was mixed up, overlapping, and inconsistent. After seeking help from K88Hudson, I succeeded in getting my code into the project and soon after made my pull request, which can be found here: "Pull Request".

Screenshot - 14-12-12 - 11:29:18 PM


To conclude, I am glad I have successfully made my first pull request. It may not be much, but I have gained the experience of working and interacting in an open source community, and I am proud to have contributed to a project that is significant and relevant to society. With my developed skills and knowledge, I will continue pushing myself to contribute to open source projects.

UPDATE: I am pleased to inform you that my pull request has been successfully merged into the project :) my first successful pull request, hooray!

by droxxes at December 13, 2014 10:17 PM

December 12, 2014

Kieran Sedgwick

[SPO600] Final Report

For my final report for SPO600 I’m going to summarize my findings on the LAME package.

LAME Profiling

The x86_64 profile (without NASM code) stats showed that the majority of the program's execution time was spent in a function called lame_encode_mp3_frame which, as it sounds, splits the source WAV file into frames and then converts them into mp3 format one by one. Digging through the code was a challenge, and I wasn't able to identify the use of any intrinsics or inline assembler code.

Screen Shot 2014-12-11 at 10.35.32 PM

The profiling of LAME's performance on aarch64 showed similar results, except when comparing execution time between the two systems. The aarch64 chip took a full six times(!) longer to finish encoding the same wav file. I'm not familiar enough with ARM chips to know whether it was a matter of hardware power or the efficiency of the algorithms. The good news was that it compiled and ran successfully in both places!

I would argue that this might mean porting isn't worth the effort. A four-minute song took just under a minute and a half to convert on the aarch64 system, versus twenty seconds on an Intel chip. Considering the huge difference, even without optimized code for the Intel chip, it might be that the aarch64 hardware itself just runs slower during this type of processing. I plan to follow up with the community to see what they think.

In any case, here’s what I figured out about the work required for a proper port:

Assessment of work

  1. Fully identify how the NASM code is used, and implement aarch64 equivalents. This would involve determining what the NASM code was replacing when it was included in compilation. I devoted some time to this, but not enough to confidently figure it out.
  2. Add aarch64 equivalents for disabling floating point exceptions. The utility file for the library contains a function that disables floating point exceptions by manipulating the FPU with intrinsics or inline assembly code, depending on compiler directives. I wasn't able to fully determine the reasoning for the different pre-compilation code paths, but it was clear that some degree of porting would be required.
  3. Update the configure script to include aarch64 equivalents to the NASM code. The Makefile would have to know to compile these, when to use them and what would be replaced by them. To fully mirror the NASM compatibility, the configure script would also need to include an option to explicitly enable/disable the aarch64 version.
  4. Log files would have to be updated to show the completed work.


I wasn’t able to do nearly as much work as I wanted, so my findings fall short of my own expectations. Despite this, I managed to identify most of what’s required for the porting process of this package and I’m hoping to have some time next semester (with a lighter courseload!) to follow up on this.

by ksedgwick at December 12, 2014 04:26 AM

December 11, 2014

Donna Oberes

Symfony site localization based on domain name

Problem: I want to localize my site based on the domain name.

Symfony strongly suggests that you use paths (like '/en' or '/fr') after the domain name to determine the locale. This is ideal for a site with only one domain name, but for a site that has a different domain for each localization, it's unnecessary: you should be able to determine the language from the domain.

Solution: Use an event listener.

With an event listener, you can catch the request, parse the domain name, and set the locale appropriately. For this blog's purposes, let's say that a site has for its English site and www.frdomain for its French site.

Create this folder/file in your bundle: EventListener/LocaleListener.php. Inside, put this:

namespace My\CustomBundle\EventListener;

use Symfony\Component\HttpKernel\Event\GetResponseEvent;

class LocaleListener
{
    public function setLocale(GetResponseEvent $event)
    {
        if (strstr(strtolower($_SERVER['HTTP_HOST']), strtolower('frdomain'))) {
            $request = $event->getRequest();
            $request->setLocale('fr');
        }
    }
}


If the HTTP_HOST contains the string 'frdomain', then set the locale to 'fr'.

Now register the listener in your bundle's services.yml file (it should be in Resources/config/). Inside, put this (the service id my.locale_listener is just an example name):

services:
    my.locale_listener:
        class: My\CustomBundle\EventListener\LocaleListener
        tags:
            - { name: kernel.event_listener, event: kernel.request, method: setLocale }

Now every time a request is sent to Symfony, this listener runs first and sets the locale to 'fr' if it detects 'frdomain' in HTTP_HOST. Otherwise, it keeps the default locale (in this case, 'en').

That should work! Happy coding!

by Donna_Oberes ( at December 11, 2014 04:12 PM

Glaser Lo

Final thoughts on OSD600

OSD600 is quite a special class to me.  Unlike other courses, this one is not just about submitting your work, but about the whole experience of getting into the outside world.

The course covers everything about socializing in the open source world, such as knowing the latest projects, using github, problem solving, talking to people on irc, blogging, etc.  It is a guide to bringing yourself into the open source culture, which I think is very useful and important for many people like me.  It makes you learn faster and feel comfortable no matter whether you are bad or good at programming.  The more you get in touch with people in the open source world, the more you learn.  It is also about working with teams and with strangers, which most people will have to do in the future.  The assignments help you get ready for that.

That is why I think this course should be mandatory.  I would suggest that new students take this course; otherwise they will miss a really good opportunity.  Since it is not a very hard course (except for people who don't really care about others), it won't hurt your other courses, and you will gain more from it than you do from other courses.  I will encourage my friends to take this course too. Dave is definitely a nice teacher to work with.

That being said, due to the heavy load of assignments from other courses this semester, I am not doing so well in this class. Hopefully, I will have enough time for OSD700 in the next one (if I pass this course). :)

by gklo at December 11, 2014 01:22 PM

Brackets (Release 0.4)


Pull request:

As the last release of the course, I decided to contribute to Adobe Brackets, because Brackets is a big project and I had heard good things about it from Habib.  When I checked out Brackets for the first time, I was surprised: the Brackets code is really well organized, fully commented, and clean.  It made me revise my original impression of Adobe (formed back in the Adobe Flash days).  I became passionate about working on this project.

The bug I picked for this release is implementing file type detection for extensionless files.  It was quite a challenging bug for me, and I decided to implement it for bash scripts only for the moment.  At the beginning, it was not easy to find information about the syntax highlighting.  After a while, I realized that Brackets uses CodeMirror to handle languages and syntax highlighting, though I had no luck with it.  I ended up getting hints from the files LanguageManager.js and Document.js.

Then came the question of the proper way to implement the functionality.  After getting help from redmund on irc, I got the idea of keeping detection in LanguageManager.js without depending on the Document object.  My first attempt at an implementation wasn't that good, but I think it was on the right track.  After a code review by dangoor, I had a clear idea of the code to write: instead of hard coding, adding a new attribute to the language object is the proper way to normalize the detection process.  At the end of Wednesday, I finally finished it and am waiting for new feedback.

It was such a good experience to contribute to Brackets. Since it is a serious project, it encouraged me to be more careful when writing my code.  Also, there are doc comments, indentation, and unit tests that need to be done whenever a new feature is implemented.  Overall, I got interested in this kind of challenging project.  It is helpful for learning.

by gklo at December 11, 2014 06:34 AM

Ava Dacayo

Why take OSD600? – Open Source Development

First of all, I would like to suggest that the Open Source Development course be made mandatory. I believe that every student should know about and participate in open source as much as we are required to know about navigating and coding on the mainframe. Don't get me wrong, I have nothing against the IBC courses, especially since one actually helped me finish a project while I was doing co-op at a bank (and mainframe will always be important), BUT I think OSD should also be part of the core skills every student must learn.

At first I had the impression that OSD is somewhat exclusive to a certain group of programmers. Not everyone participates in it, so I never bothered, especially since it was not required. But when I was looking for a course to take and couldn't find any, I decided to face my fears and just take OSD, even though I absolutely had no idea how it works. I somehow knew how to use GitHub since I had an awesome teacher for OOP344, Cathy, who was patient enough to teach it to her students, but I knew nothing when it came to finding bugs online and fixing them.

Having Dave as a teacher in this course is just amazing. He is such a great instructor that I almost want to clap every time the class ends, because I feel like I just witnessed a wonderful presentation. I also started becoming interested in the topics he talks about, like the news about Microsoft open sourcing .NET. If I had just seen that anywhere online, I probably would not even have bothered reading it. I also found it interesting to hear how other programmers think and interact (like how they tweet and blog a lot). The topic of The Cathedral and the Bazaar from around week 2 also helped me understand how differently cathedral-style software is built compared to open source.

Another thing I like about OSD is that I can work on things that interest me. Coming up with a mini project to work on is hard, and that is one thing open source helps me with, since there are millions of projects available and I get to choose the ones I like. I have also been wanting to practice reading other people's code, and the open source world is a great resource for that.


FSOSS shirt and stickers!

In addition to those mentioned, I've also learned a few ideas like MVP (Minimum Viable Product), where you release features that are sufficient for the meantime; you don't have to have the entire project, along with a thousand features, ready before releasing it. Learning how to dive into codebases where you don't know a single line of code is also very handy. You don't have to understand how the entire project works before working on or fixing something in it; just be good at finding out where you have to make your changes. I was also able to experience attending FSOSS. I guess it's interesting attending something "geeky" from time to time?

All in all, I would say I had a fun experience in this class, and it has definitely helped me get started contributing to open source. It is most likely something that I will continue even after the class ends. Thanks again to Dave Humphrey! BTW, I am aware you tweeted one of my previous blogs, and seeing in the stats that it has been viewed by people from different countries is another new experience for me.

Thanks for reading this everyone! – Ava

by eyvadac at December 11, 2014 05:07 AM

James Laverty

Remove a page they said...

Hey everyone!

I took it upon myself to try to fix a bug in Webmaker's login screen, where a page was redundant and they thought it should be removed. I eventually found the hidden page, but when I asked if they would like the page to be removed, I found out that the issue lay in webmaker_login_ux.

From there, I tried to find the source of the bug. There's an area in Webmaker with a boolean value that, when set to false, should hide the page. It was set to false, but alas, the page still reared its ugly head.

After checking and trying different 'fixes', I wasn't able to solve the problem, but I won't stop believing that I can do it. Hopefully, after I'm done with exams, I can finish this bug and keep contributing to the Open Source community. It's been fun!


James Laverty

by James L ( at December 11, 2014 03:52 AM

Android in Webmaker!

Hey everyone,

I just added source maps to Browserify. It was a very challenging process, because I talked to several different sources in the industry, and the majority of those I talked to had no idea what I was talking about. It was interesting because I ended up going out to lunch with people who are professionals in Java/Android, web development, and PaaS. It was a fantastic experience, but it ended with me being made fun of for asking questions about source maps; maybe I'm confused about my college lingo.

I find Open Source development very exciting, but in contrast, I find doing it within a timeline terrifying, unless you have contacts within whatever you are trying to develop.

This course, and my professor David Humphrey, have taught me a lot. Let's keep it going,


James Laverty

by James L ( at December 11, 2014 03:45 AM

Ava Dacayo

Release 0.4 – Fix issue #2150 (2nd fixed bug)

First, I may not have followed the guidelines for this bug fix, because I just worked on it without asking (I'M SORRY, I'm trying to catch the due date tonight). This bug is about app renaming in Mozilla Appmaker, where the user is allowed to rename an app while not signed in, but gets an error message when the Title is changed. The fix adds a check for whether the user is currently signed in and, if not, first shows a "Please log in" error message.
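The logic of the fix can be sketched like this (the actual change is in Appmaker's JavaScript; the Python below and its names are purely illustrative):

```python
# Hypothetical sketch of the guard added by the fix: renaming is only
# allowed when the user is signed in; otherwise an error is reported
# instead of silently failing.
def rename_app(app, new_title, signed_in):
    if not signed_in:
        return {"ok": False, "error": "Please sign in first"}
    app["title"] = new_title
    return {"ok": True, "error": None}
```

The point of the design is to check the session state before attempting the rename, so the user sees a clear "please sign in" message instead of an opaque server error.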

please sign in

Pull request can be found here.

by eyvadac at December 11, 2014 03:36 AM

Glaser Lo

Case Study on Firefox OS

Firefox OS

A whole new open-source operating system for smartphones, tablets, and smart TVs, built with Linux, Gecko, and Open Web standards.


There are three parts in Firefox OS


Gaia

License: Apache 2.0

A user interface library built on top of Gecko, written in HTML/CSS/JavaScript.  It is a framework that helps developers build a good user interface and experience on Firefox OS.  All graphical components are written based on Gaia.


Gecko

License: MPL 2.0

The core engine of Firefox and Firefox OS.  It handles nearly everything: rendering the app screen, the JavaScript runtime, and giving web apps access to hardware. It also provides a compatibility layer for all platforms, making sure web apps run properly on any device.


Gonk

License: Apache 2.0

Gonk is basically a bundle including the Linux kernel and a set of tools based on AOSP (the Android Open Source Project).  It theoretically allows Firefox OS to run on any device that supports Android.


Project homepage

- Contains very detailed and well organized information for developers

- Contains other information like Firefox OS staffs and work flow for the project




Gecko: (Read-only)


Andreas Gal


The Director of Research at Mozilla Corporation

  • Originally announced the Boot2Gecko project on the mailing list in 2011
  • Gonk Maintainer
  • @andreasgal

Other people worth mentioning

Timothy Guan tin Chien


Vivien Nicolas



Gaia: 546 contributors

B2G/Gonk: 73 contributors

Gecko: 1539 contributors(github)

- A pretty big community especially for Gecko


Mailing list


The importance of the project

Developers are able to create cross-platform mobile applications using HTML5/CSS/JavaScript

A thinner abstraction layer between the WebAPIs and the hardware in Firefox OS offers better performance and a more intuitive experience

Helping to produce low-cost smartphones for people who cannot afford expensive ones

Spreading the concept of Open Web and Freedom

Who uses it?

People in 24 countries, including India, Hungary, Greece, and Poland, across Latin America, Europe, and Asia

Developers!

by gklo at December 11, 2014 02:36 AM

[Late] Build Firefox on Windows 8.1

To build Firefox on Windows 8.1, a few steps are needed.

  1. Install Visual Studio (2010 – 2013)
  2. Download and install the mozilla-build bundle
  3. Run start-shell-msvc2013.bat in C:\mozilla-build; it will give you a CLI environment for compilation using Visual Studio 2013.
  4. cd to the root of the source folder downloaded from the Mozilla repository
  5. Build it by running:
./mach build

Finally, run it:

./mach run

Thanks to Mozilla, the whole build process is pretty smooth, without any extra configuration. :)

(screenshots)


by gklo at December 11, 2014 01:24 AM

Hosung Hwang

CordovaStabilizer – Build Android System WebView

Based on yesterday's research, I searched for build instructions for the Android WebView (the apk that auto-updates).

In the Android Build Instructions, there were only three build options: Content Shell, Chrome Shell, and WebView Shell. However, in the Build Instructions (Android WebView), there is one more option: building the System WebView.

build/gyp_chromium -DOS=android -Dandroid_webview_telemetry_build=1
ninja -C out/Release system_webview_apk

To install it onto the device, the existing WebView needs to be removed first; then the new one can be installed. This is possible only on Lollipop.

Uninstalling existing WebView :

adb root
adb remount
adb shell stop
adb shell rm -rf /system/app/webview /system/app/WebViewGoogle
adb shell start

Installing new WebView apk:

adb install -r -d out/Release/apks/SystemWebView.apk

I built it.

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/gyp_chromium -DOS=android -Dandroid_webview_telemetry_build=1
Updating projects from gyp files...
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ ninja -C out/Release system_webview_apk
ninja: Entering directory `out/Release'

It seems to rebuild all files, maybe because of the gyp_chromium step. 3.5 hours again.

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src/out/Release/apks$ ls -l
total 87496
-rw-rw-r-- 1 hosung hosung 20316933 Dec  3 17:07 AndroidWebView.apk
-rw-rw-r-- 1 hosung hosung 28574135 Nov 26 17:30 ChromeShell.apk
-rw-rw-r-- 1 hosung hosung 20310932 Nov 26 17:45 ContentShell.apk
-rw-rw-r-- 1 hosung hosung 20385767 Dec 10 19:02 SystemWebView.apk

The size is 19.4MB.
To test it, I need to make a Lollipop emulator.

by Hosung at December 11, 2014 12:09 AM

December 10, 2014

Omid Djahanpour

PSmisc Wrap-up

PSmisc is a set of system administration tools, described in more detail in one of my prior posts. I chose to work on this package because I do a lot of system administration and wanted to take a closer look at what this collection of binaries has to offer.

This post will have a similar format to my previous post regarding Siege. The information as well as the code snippets will be laid out in a similar manner with a link to the profiling files at the end of the post.


  • Does not need porting
  • Does not need any optimization. Changes in performance are negligible
  • Performance is the same on both x86_64 and ARMv8-aarch64


First, we must turn on the profiling option at compile time in order to collect program statistics as the binaries are executed.

To do this, we would use a command such as:

 ./configure CFLAGS=-pg LDFLAGS=-pg --prefix=$HOME/psmisc

This command gives the C compiler and the linker the option that turns on profiling (-pg). The rest of the compilation goes through as normal with make and make install. The next step is the important one, where we actually gather statistics from the binaries as they are executed.

As I mentioned above, PSmisc is a collection of system administration tools, therefore, there are four binaries here that can be profiled.

Those four binaries are:

fuser identifies processes using files or sockets (similar to Sun’s
or SGI’s fuser)
killall kills processes by name, e.g. killall -HUP named
pstree shows the currently running processes as a tree
peekfd shows the data travelling over a file descriptor

Please Note

I will only be demonstrating profiling with pstree and fuser, as I do not want to do any process manipulation on a server that is not mine.

Before I begin executing the binaries to collect statistical information, I want to make an important note.

The only assembly in this collection of binaries involves the use of prefetch instructions as seen below:

[odjahanpour@australia psmisc-22.21]$ egrep "__asm__|asm.*\(" -R *
src/lists.h:#  define asm                       __asm__
src/lists.h: *   echo '#include <asm-i386/processor.h>\nint main () { prefetch(); return 0; }' | \
src/lists.h:    asm volatile ("lfetch [%0]"    :: "r" (x))
src/lists.h:    asm volatile ("dcbt 0,%0"      :: "r" (x))
src/lists.h:    asm volatile ("661:\n\t"

 65 #if   defined(__x86_64__)
66     asm volatile ("prefetcht0 %0"  :: "m" (*(unsigned long *)x))
67 #elif defined(__ia64__)
68     asm volatile ("lfetch [%0]"    :: "r" (x))
69 #elif defined(__powerpc64__)
70     asm volatile ("dcbt 0,%0"      :: "r" (x))
71 #elif !defined(__CYGWIN__) && !defined(__PIC__) && defined(__i386__)
72     asm volatile ("661:\n\t"
73                   ".byte 0x8d,0x74,0x26,0x00\n"
74                   "\n662:\n"
75                   ".section .altinstructions,\"a\"\n"
76                   "  .align 4\n"
77                   "  .long 661b\n"
78                   "  .long 663f\n"
79                   "  .byte %c0\n"
80                   "  .byte 662b-661b\n"
81                   "  .byte 664f-663f\n"
82                   ".previous\n"
83                   ".section .altinstr_replacement,\"ax\"\n"
84                   "   663:\n\t"
85                   "   prefetchnta (%1)"
86                   "   \n664:\n"
87                   ".previous"
88                   :: "i" ((0*32+25)), "r" (x))
89 #else
90     __builtin_prefetch ((x), 0, 1);
91 #endif

As you can see, there is asm included specifically for the x86_64 architecture, with a C fallback at line 90. I did some research on the prefetch instruction and came across a post on the StackExchange network which I felt was important to point to. I will touch on this point later in this post.

I also wanted to point out that the lists.h file is specified in the Makefile for compiling the fuser binary, as seen below:

fuser_SOURCES = fuser.c comm.h signals.c signals.h i18n.h fuser.h lists.h

This is the only reference to this file in the Makefile.

Gathering Profiling Information

To view any profiling information, we must first execute the binary we want to collect information from. I will start with the easiest binary available, pstree.


As the name suggests, when executed, pstree prints out the running processes in a tree format, similar to the way the tree command works on Unix systems.

pstree does not require any arguments, but for those of you who are interested, here is its usage output:

[odjahanpour@australia bin]$ ./pstree --help
./pstree: unrecognized option '--help'
Usage: pstree [ -a ] [ -c ] [ -h | -H PID ] [ -l ] [ -n ] [ -p ] [ -g ] [ -u ]
[ -A | -G | -U ] [ PID | USER ]
pstree -V
Display a tree of processes.

-a, --arguments     show command line arguments
-A, --ascii         use ASCII line drawing characters
-c, --compact       don't compact identical subtrees
-h, --highlight-all highlight current process and its ancestors
--highlight-pid=PID highlight this process and its ancestors
-g, --show-pgids    show process group ids; implies -c
-G, --vt100         use VT100 line drawing characters
-l, --long          don't truncate long lines
-n, --numeric-sort  sort output by PID
-N type,
--ns-sort=type      sort by namespace type (ipc, mnt, net, pid, user, uts)
-p, --show-pids     show PIDs; implies -c
-s, --show-parents  show parents of the selected process
-S, --ns-changes    show namespace transitions
-u, --uid-changes   show uid transitions
-U, --unicode       use UTF-8 (Unicode) line drawing characters
-V, --version       display version information
PID    start at this PID; default is 1 (init)
USER   show only trees rooted at processes of this user

It seems pstree does not recognize the -h or --help argument, but that is sufficient to get it to dump its usage and the list of arguments you can provide it. For more details, you can also take a look at its man page.

Simply executing pstree from the directory it was compiled in is enough for it to generate a new file called gmon.out. This file contains the profiling information we want to analyze in order to view what’s going on when we run this binary.

You cannot simply read this file with your favorite text editor, as it is not a plain text file, as seen below:

[odjahanpour@australia bin]$ file gmon.out
gmon.out: GNU prof performance data - version 1

In order to read the contents of this file, we must do the following:

[odjahanpour@australia bin]$ gprof ./pstree gmon.out > analysis.txt

Note: I am calling the file analysis.txt, however, you can save the output file to any name you like.

Once you open the file, you will see the data in a similar format where there are columns such as the ones below:

% time cumulative seconds self seconds calls self ms/call total ms/call name

It is important that we first understand what these columns signify before we continue. Below, I am going to list what these columns mean according to the descriptions found within the file itself:

% time the percentage of total running time of the program used by this function.
cumulative seconds a running sum of the number of seconds accounted for by this function and those listed above it
self seconds the number of seconds accounted for by this function alone. this is the major sort for this listing.
calls the number of times this function was invoked, if this function is profiled, else blank.
self ms/call the average number of milliseconds spent in this function per call, if this function is profiled, else blank.
total ms/call the average number of milliseconds spent in this function and descendents per call, if this function is profiled, else blank.
name the name of the function. this is the minor sort for this listing. the index shows the location of the function in the gprof listing. if the index is in parenthesis it shows where it would appear in the gprof listing if it were to be printed.
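To make the column meanings concrete, here is a small Python sketch that splits one line of the flat profile into those fields. It assumes the simple whitespace-separated layout of the samples quoted in this post; real gprof output is fixed-width and may leave fields blank, which this sketch does not handle:

```python
# Parse one whitespace-separated line of gprof's flat profile into the
# columns described above (simplified; blank fields are not handled).
def parse_flat_profile_line(line):
    f = line.split()
    return {
        "pct_time": float(f[0]),          # % time
        "cum_seconds": float(f[1]),       # cumulative seconds
        "self_seconds": float(f[2]),      # self seconds
        "calls": int(f[3]),               # calls
        "self_ms_per_call": float(f[4]),  # self ms/call
        "total_ms_per_call": float(f[5]), # total ms/call
        "name": f[6],                     # function name
    }
```

For instance, feeding it the out_char row from the x86_64 table below yields 4064 calls attributed to that function.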

I will not be pasting the entire contents of the analysis output, but I will include a link to it at the end of this post where you may download and analyze the full file.

Profile Statistics x86_64
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 4064 0.00 0.00 out_char
0.00 0.00 0.00 1476 0.00 0.00 get_ns_name
0.00 0.00 0.00 1052 0.00 0.00 tree_equal
0.00 0.00 0.00 493 0.00 0.00 find_proc
Profile Statistics ARMv8-aarch64
% time cumulative seconds self seconds calls self ms/call total ms/call name
100.08 0.01 0.01 857 0.01 0.01 out_char
0.00 0.01 0.00 1002 0.00 0.00 get_ns_name
0.00 0.01 0.00 355 0.00 0.00 find_proc
0.00 0.00 0.00 167 0.00 0.00 new_proc

We can see that the out_char function accounts for 100% of the program's execution time on ARMv8. Compared to the x86_64 results that may seem like a lot; however, the numbers here are so small as to be negligible for real-world performance, as both programs finished in about the same time. To show this, I have included the time it took for both systems to run the same command:


x86_64:

real    0m0.042s
user    0m0.004s
sys     0m0.013s


ARMv8-aarch64:

real    0m0.017s
user    0m0.010s
sys     0m0.000s

From the output above, we can even see that it took the ARMv8 system less time than the x86_64 machine to run the same command. This alone is a poor comparison, though, as the x86_64 machine had a lot more processes to print than the ARMv8 machine.


fuser shows which processes use the named files, sockets, or filesystems.

Returning to the note I made above about prefetching, fuser is the binary that uses the prefetch instructions. For this reason, to demonstrate that there was no need to write any architecture-specific prefetch instructions for the ARMv8-aarch64 machine, I compiled two versions of the PSmisc utilities. The first build includes the architecture-specific code for x86_64, whereas in the second build I removed the prefetch instruction so that it relies on the C fallback, which uses GCC's __builtin_prefetch.

x86_64 With Prefetch Instruction
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 549 0.00 0.00 get_pidstat
0.00 0.00 0.00 183 0.00 0.00 check_dir
0.00 0.00 0.00 183 0.00 0.00 check_map
0.00 0.00 0.00 183 0.00 0.00 getpiduid

real    0m0.080s
user    0m0.016s
sys     0m0.053s

x86_64 With GCC __builtin_prefetch Instruction
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 549 0.00 0.00 get_pidstat
0.00 0.00 0.00 183 0.00 0.00 check_dir
0.00 0.00 0.00 183 0.00 0.00 check_map
0.00 0.00 0.00 183 0.00 0.00 getpiduid

real    0m0.115s
user    0m0.018s
sys     0m0.063s

ARMv8-aarch64
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 435 0.00 0.00 get_pidstat
0.00 0.00 0.00 145 0.00 0.00 check_dir
0.00 0.00 0.00 145 0.00 0.00 check_map
0.00 0.00 0.00 145 0.00 0.00 getpiduid

real    0m0.182s
user    0m0.010s
sys     0m0.020s

As you can see, there was no difference between the two builds on the x86_64 system or on the aarch64 machine.

Final Thoughts

PSmisc is a nice bundle of utilities that system administrators can use to quickly query information. It is a simple and small bundle of utilities that will work on any architecture, as it did not require any porting to run on the aarch64 machine. Its only use of assembly was for prefetch, which made no significant difference, as the C fallback uses GCC's __builtin_prefetch, which generates the right code for whichever architecture it is compiled for.

It is possible that the architecture-specific asm instructions for prefetch could be removed completely in favor of GCC's __builtin_prefetch; however, I do not have enough knowledge to make that decision, so I am leaving the code the way it is. With further examination and a lot more research, I may be able to conclude that GCC's built-in prefetch will suffice, but until then I do not want to change the source, as the utilities provided by this package work flawlessly on any architecture.

Profiling Files



by Omid Djahanpour at December 10, 2014 04:13 AM

Siege Wrap-up

When I first began looking through the list of available packages to work on from the Linaro Performance Challenge website, I immediately noticed Siege, and because I had used it in one of my other classes, I thought it would be fun to dig through its internals to see how it works.


  • No porting necessary
  • Does not require optimization
  • Same performance on x86 vs ARMv8-aarch64


As you read along, I will post snippets from the profiling I've done and explain them to the best of my ability.

In a previous post, I showed how I compiled Siege; however, at that time I did not turn on the profiling options required for profiling.

The only difference in the compile process when turning on profiling is that you need to provide the C compiler and the linker with the option to enable it: the -pg switch. From the command line, it would look something like this:

./configure CFLAGS=-pg LDFLAGS=-pg --prefix=$HOME/siege

The rest of the compilation is the same process of make and make install. Once installed, navigate to the directory that contains the Siege binaries and run siege once. If you are unsure how to run Siege, you can simply run:

[odjahanpour@australia bin]$ ./siege -h
SIEGE 3.0.8
Usage: siege [options]
siege [options] URL
siege -g URL
-V, --version VERSION, prints the version number.
-h, --help HELP, prints this section.
-C, --config CONFIGURATION, show the current config.
-v, --verbose VERBOSE, prints notification to screen.
-q, --quiet QUIET turns verbose off and suppresses output.
-g, --get GET, pull down HTTP headers and display the
transaction. Great for application debugging.
-c, --concurrent=NUM CONCURRENT users, default is 10
-i, --internet INTERNET user simulation, hits URLs randomly.
-b, --benchmark BENCHMARK: no delays between requests.
-t, --time=NUMm TIMED testing where "m" is modifier S, M, or H
ex: --time=1H, one hour test.
-r, --reps=NUM REPS, number of times to run the test.
-f, --file=FILE FILE, select a specific URLS FILE.
-R, --rc=FILE RC, specify an siegerc file
-l, --log[=FILE] LOG to FILE. If FILE is not specified, the
default is used: PREFIX/var/siege.log
-m, --mark="text" MARK, mark the log file with a string.
-d, --delay=NUM Time DELAY, random delay before each requst
between 1 and NUM. (NOT COUNTED IN STATS)
-H, --header="text" Add a header to request (can be many)
-A, --user-agent="text" Sets User-Agent in request
-T, --content-type="text" Sets Content-Type in request

Once Siege finishes running, you will notice that a new file called gmon.out has been created. Running the file command on it provides some basic information about what kind of file it is:

[odjahanpour@australia bin]$ file gmon.out
gmon.out: GNU prof performance data - version 1

This file contains profiling information for the last executed command of the binary. To access the contents of the file, you would issue a command like this:

[odjahanpour@australia bin]$ gprof ./siege gmon.out > analysis.txt

The output file is called analysis.txt, however, you can specify any file name you like.

Now to view the actual profiling statistics, you would simply read the file analysis.txt (or whatever you have named your file as).

Once you view the file, you see that the data is sorted into multiple columns with column headers defined like so:

% time cumulative seconds self seconds calls self ms/call total ms/call name

Taken directly from the file itself, this is what those columns signify:

% time the percentage of total running time of the program used by this function.
cumulative seconds a running sum of the number of seconds accounted for by this function and those listed above it
self seconds the number of seconds accounted for by this function alone. this is the major sort for this listing.
calls the number of times this function was invoked, if this function is profiled, else blank.
self ms/call the average number of milliseconds spent in this function per call, if this function is profiled, else blank.
total ms/call the average number of milliseconds spent in this function and descendents per call, if this function is profiled, else blank.
name the name of the function. this is the minor sort for this listing. the index shows the location of the function in the gprof listing. if the index is in parenthesis it shows where it would appear in the gprof listing if it were to be printed.

I will not be pasting the entire contents of the file as it is quite long, instead I will only paste what I believe is important. I will, however, attach the file to this post for anyone interested in downloading a copy to view.

Profile Statistics x86_64

1st Execution
% time cumulative seconds self seconds calls self ms/call total ms/call name
50.00 0.01 0.01 97354 0.00 0.00 socket_read
50.00 0.02 0.01 163 0.06 0.06 url_get_method
0.00 0.02 0.00 99445 0.00 0.00 socket_check
0.00 0.02 0.00 92612 0.00 0.00 echo

Looking at the table above, we see that the total running time of Siege is split evenly between two functions, socket_read and url_get_method. What's interesting is that if I execute Siege a second time with the same arguments and look at the new analysis.txt file, the results are drastically different:

2nd Execution
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 92545 0.00 0.00 __socket_check
0.00 0.00 0.00 90592 0.00 0.00 socket_read
0.00 0.00 0.00 86028 0.00 0.00 echo
0.00 0.00 0.00 1215 0.00 0.00 auth_get_proxy_required

I’m actually not sure why this is the case, but please feel free to leave a comment if you know the cause for this.

Profiling Statistics ARMv8-aarch64

1st Execution
% time cumulative seconds self seconds calls self ms/call total ms/call name
100.00 0.01 0.01 293 0.03 0.03 __socket_block
0.00 0.01 0.00 88401 0.00 0.00 socket_check
0.00 0.01 0.00 86545 0.00 0.00 socket_read
0.00 0.01 0.00 82354 0.00 0.00 echo
2nd Execution
% time cumulative seconds self seconds calls self ms/call total ms/call name
0.00 0.00 0.00 91835 0.03 0.03 __socket_check
0.00 0.00 0.00 89913 0.00 0.00 socket_read
0.00 0.01 0.00 85212 0.00 0.00 echo
0.00 0.01 0.00 1189 0.00 0.00 url_get_port

Execution Times — No Profiling

Throughout this whole post I have been testing Siege against my WordPress URL, simulating 10 concurrent users for 10 seconds using HEAD requests. Now I'm going to do the same thing, but I will recompile Siege with profiling turned off, and this time I will use the time command when executing Siege.


real 0m9.329s
user 0m0.822s
sys 0m0.059s


real 0m9.682s
user 0m1.300s
sys 0m0.200s

Using the same commands on both platforms, and using the time command to view how long it took to execute and run Siege, we see that there is almost no difference between the two on either platform.

The reason I say this is that when I dug through the source code, I noticed that the only assembly in the source was in a file called md5.h, which I took a closer look at in a previous post. The logic in that file only applies the asm if the system is an i386, which these days is quite rare.

Final Thoughts

Both platforms are using the same code, and it runs efficiently. Siege is not a very big program, its execution times are fairly quick, and there are many variables you must take into account when benchmarking this kind of program, as the program itself is a benchmark utility. For instance, the execution-time results above are not something you can really rely on, since I used the -t10s argument for Siege, which means "benchmark this URL for 10 seconds." Therefore, the time command will report somewhere around 10 seconds, as that is how long Siege takes to finish. If, for some reason, the execution time were, say, 50 seconds on ARMv8-aarch64, then we could say that something is wrong and the code needs to be optimized for that platform.

I feel that Siege is ready to be marked off as complete. Through my testing, I know that it will compile and run fine not just on the x86_64 or ARMv8-aarch64 platforms, but on any platform that can run C code.

Profiling Files


1st execution

2nd execution


1st execution

2nd execution

by Omid Djahanpour at December 10, 2014 01:07 AM

Hosung Hwang

CordovaStabilizer – Android WebView History and Characteristics

Android WebView History

Recently, the Android WebView has changed significantly.

  • Android Cupcake ~ Jelly Bean (1.0 ~ 4.3.x): Custom WebKit-based WebView
  • Android KitKat (4.4 ~ 4.4.4) : Chromium 30/33-based WebView
  • Android Lollipop (5.0) : Unbundled evergreen WebView, autoupdated via Play Services

Using Chromium in the KitKat WebView brought new capabilities:

  • IndexedDB
  • Web Sockets
  • requestAnimationFrame
  • SVG filters and Effects
  • H/W accelerated rendering
  • Significantly faster than old WebView

Changes from KitKat to Lollipop include:

  • WebRTC
  • WebAudio
  • WebGL
  • Auto-updating

Auto-update is interesting. Before Lollipop, the WebView was part of the Android framework, and to update it, the Android OS needed to be updated. From Lollipop on, however, the WebView can be updated through Google Play for security and bug fixes. This means that, within the Chromium or Android source code, there might be a stand-alone WebView build environment. (I couldn't find it yet.)

Difference between Chrome and WebView

  • Chrome : multi-process
  • WebView : single-process
    – provides hooks to override cookies/networking
    – provides JavaScript injection

Reference : 

by Hosung at December 10, 2014 01:00 AM

December 09, 2014

Omid Djahanpour

The End Is Near

This post, along with the few others to follow, will be wrapping up my work that has been done throughout the course for SPO600, as well as working with the Linaro Performance Challenge.

I’m still not sure if I would have the time to maintain a blog, or if I will be motivated to do so.  I do want to say that it’s been a fun semester, and I’ve learned a lot with this course, as well as my other courses.

I will not say my farewells just yet as I plan on wrapping up the things I have done with Siege and PSmisc. I will have these posts published within a few hours from now so stay tuned!

by Omid Djahanpour at December 09, 2014 08:04 PM

Shuming Lin

Release 0.4 – issue#570

I am comfortable working with webmaker-app, and my release 2 and 3 pull requests were merged; I am very happy with that. However, I got a message from Kate about my release 3 issue: she wanted me to fix it in a different way, in a different file. I resolved what she wanted that same night and made a pull request.

This time I am still working on a Webmaker-app issue for release 4: “Sign In – Form UX issues during sign up #570″. The problem is: “Upon entering the sign-up screen, the user is presented with a form validation error upon bringing the email field into focus. After entering a valid email address, the user then needs to press “out” of the form to cause the validation error to clear. Pressing return on the virtual keyboard does not dismiss the control.”

To work on this issue, I needed to set up an environment to run Webmaker on Android. Following the documentation, I installed Firefox Nightly and the Firefox OS Simulator. The Firefox OS Simulator is a version of the higher layers of Firefox OS that simulates a Firefox OS device but runs on the desktop, which means that in many cases you don’t need a real device to test and debug your app. It is not like developing an Android app with the SDK in Eclipse, where you run your app on a device. After setting up the environment and running the app, you will see a screen like:

In the past few days I tried to fix this bug, but did not succeed. Finally, I found that the create-account source file comes from “webmaker-login-ux”, which webmaker-app uses. So I think this problem comes from the “webmaker-login-ux” source code.
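Since the reported bug is that the validation error only clears when the field loses focus, the fix presumably amounts to re-validating on every input event instead. A hedged sketch of that idea; the function names and the email regex are illustrative, not the actual webmaker-login-ux code:

```javascript
// Very rough email check, only to illustrate the re-validation idea.
function isValidEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Browser wiring (illustrative): clear the error as soon as the
// address becomes valid, instead of waiting for the field to blur.
function attachLiveValidation(emailInput, errorEl) {
  emailInput.addEventListener("input", function () {
    errorEl.hidden = isValidEmail(emailInput.value);
  });
}
```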


I found a problem in “webmaker-login-ux” that may solve this issue:


I will ask the maintainer for help with this problem, and I am comfortable continuing to work with webmaker-app in the future.

by Kevin at December 09, 2014 07:37 AM

Conclusion: Open Source Course

This is the last week of the semester. I am very happy to have taken OSD600, David’s course on Open Source Development. This semester I learned technical skills (JavaScript, JSON, Node.js, Brackets, using Git), learned more about open source, and worked on an open source project. Working on an open source project is very interesting, and your skills improve. Open source is a way to learn a new language and to get better, because it is a real project: you work with developers from everywhere in the world, and you also make friendships. Most open source projects are on GitHub. I really enjoyed working with open source; it is fun. I found many great and useful open source tools, such as Brackets and webmaker-app. If you are a web developer, I recommend trying Brackets for web development. If there were another course about open source, I would like to take it and keep learning. I will also keep working on open source projects, because it not only improves my skills; as a contributor, you also help change the world.

by Kevin at December 09, 2014 04:06 AM

Linpei Fan

Project Release 0.4

In my project release 0.4, I continued working on the webmaker app project. I worked on issue #569, the UI adjustments on Sign In. The code I fixed is in webmaker-login-ux, which the webmaker app uses for login. My pull request is here, and the screenshot after the fix is as follows:

by Lily Fan ( at December 09, 2014 02:46 AM

Andrew Li

Release 0.4

For release 0.4 I worked on issue #2343, a proposed brick idea for speaking and highlighting text word by word.

I approached this brick by trying to find out whether there were any existing speech APIs available for the web. After some searching I found the World Wide Web Consortium (W3C) JavaScript API that lets developers use speech synthesis to generate text-to-speech output on a web page. The API provides methods for speaking, canceling, pausing and resuming an utterance.

Currently, the only browsers that support web speech working out of the box are Chrome version 31 or greater and Safari version 7 or greater.

We can detect whether or not a browser supports web speech using JavaScript:

if ('speechSynthesis' in window) {
    alert('Your browser supports speech synthesis');
}

We can get a list of voices with the getVoices() method, which returns an array of available voice objects. You may get something similar to:

default: true
lang: "en-US"
localService: true 
name: "Alex"
voiceURI: "Alex"

For the time being, I took a mash of the audio component and textbox component for developing the brick’s interface.

Highlighting text was not too hard after reading the utterance events section of the API. We can get the position of the utterance using charIndex by attaching an event listener for the boundary event.

this.$.utterance.addEventListener("boundary", function(e) {
    // e.charIndex gives the character offset of the word being spoken
});

With this approach there is a slight delay between the actual utterance and when a word gets highlighted since the boundary event needs to fire (utterance spoken first) before the highlighting can begin.
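Putting the pieces together, here is a sketch of the speak-and-highlight flow. The helper names are mine, not the brick's actual code, and the wiring assumes a browser with the Web Speech API:

```javascript
// Pure helper: return the word being spoken at a boundary offset.
function wordAt(text, charIndex) {
  var match = text.slice(charIndex).match(/^\S+/);
  return match ? match[0] : "";
}

// Browser-only wiring: speak `text`, calling onWord at each word
// boundary so the caller can highlight the current word.
function speakAndHighlight(text, onWord) {
  var utterance = new SpeechSynthesisUtterance(text);
  utterance.addEventListener("boundary", function (e) {
    onWord(wordAt(text, e.charIndex), e.charIndex);
  });
  window.speechSynthesis.speak(utterance);
}
```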

Other issues that will need fixing:

  1. stop utterance when brick is removed from the card
  2. highlighting for Japanese (all other languages seem to highlight properly)

Apart from the slight issues, the brick works as described - ‘it goes through text while speaking and highlights word by word’.

I have not created a pull request yet, as bugs need to be fixed and suggestions will come up but so far here is the code: issue2343_webspeech.

December 09, 2014 12:00 AM


December 08, 2014

Ava Dacayo

Release 0.4 – Fix #2353

Again, this bug is about the Media Player brick in Appmaker showing twice whenever it is loaded. Later I found out that the same behaviour happens whenever you click the “Duplicate this brick” button as well. I’m currently waiting for them to look at my pull request so I know whether I have to change anything. So far I have tested this fix only by clicking the “Duplicate this brick” button, because I can’t seem to find a “Save App” button anywhere when running Appmaker locally.

The Media Player brick currently has a default video that loads whenever you create one. Every time you select the brick, it calls the initPopcorn() function, which works with popcorn.js to create the player. The bug happens when you change the URL of the video source and then duplicate the brick (or possibly load the saved app, which I did not get the chance to test). This calls initPopcorn() twice: first from ready:, and then from urlChanged:, which is triggered when the source changes. The quick fix I added was a check in ready: to see if the URL is still the original; only then is initPopcorn() called from ready:.
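The check described above can be modeled in miniature. All names here (makeMediaBrick, initPopcorn, the ready/urlChanged hooks) are illustrative stand-ins, not the actual Appmaker code:

```javascript
// Toy model of the brick lifecycle: ready fires on load/duplicate,
// urlChanged fires whenever the source URL changes.
function makeMediaBrick(defaultSrc, initialSrc) {
  var brick = {
    src: initialSrc,
    initCount: 0,
    initPopcorn: function () { brick.initCount += 1; },
    ready: function () {
      // The fix: only initialize from ready for the original URL;
      // a changed URL means urlChanged will call initPopcorn itself.
      if (brick.src === defaultSrc) brick.initPopcorn();
    },
    urlChanged: function (newSrc) {
      brick.src = newSrc;
      brick.initPopcorn();
    }
  };
  return brick;
}
```

With this guard, duplicating a brick whose URL was changed runs initPopcorn() once (from urlChanged:) instead of twice.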

It’s a one-liner fix that drove me nuts, especially since my dev environment got messed up last time (still waiting for feedback, but hey, it works!). I am super glad and very thankful to the people who created the Developer Tools in Chrome, and to Dave who showed them in class; this made my life easier.

by eyvadac at December 08, 2014 10:53 PM

Lukas Blakk (lsblakk)

Ascend New Orleans: We need a space!

I’m trying to bring the second pilot of the Ascend Project to New Orleans in February and am looking for a space to hold the program. We have a small budget to rent space but would prefer to find a partnership and/or sponsor if possible to help keep costs low.

The program takes 20 adults who are typically marginalized in technology/open source and offers them a 6 week accelerated learning environment where they build technical skills by contributing to open source – specifically, Mozilla. Ascend provides the laptops, breakfast, lunch, transit & childcare reimbursement, and a daily stipend in order to lift many of the barriers to participation.

Our first pilot completed 6 weeks ago in Portland, OR and it was a great success with 18 participants completing the 6 week course and fixing many bugs in a wide range of Mozilla projects. They have now continued on to internships both inside and outside of Mozilla as well as seeking job opportunities in the tech industry.

To do this again, in New Orleans, Ascend needs a space to hold the classes!

Space requirements are simple:

* Room for 25 people to comfortably work on laptops
* Strong & reliable internet connectivity
* Ability to bring in our own food & beverages

Bonus if the space helps network participants with other tech workers, has projector/whiteboards (though we can bring our own in), or video capability.

Please let me know if you have a connection who can help with getting a space booked for this project and if you have any other leads I can look into, I’d love to hear about them.

by Lukas at December 08, 2014 10:48 PM

December 07, 2014

Gary Deng

OSD600 Release 0.4 Done

It’s a little bit late for this final release, but I am very excited that I was able to fix the bug in the Mozilla Appmaker project. I got some hints from the Appmaker team, and I completely rewrote the code from my previous pull request (Release 0.2). The following screenshots are the test results:


1. Before channel “A” is disabled:

2. After channel “A” is disabled:
3. Before channel “B” is disabled:
4. After channel “B” is disabled:

Both the listen-menu and broadcast-menu UI are rendered as expected. What have I learned in this release?

  • The general idea of Polymer
  • JavaScript DOM element traversal, such as parentNode, childNodes, classList, hasAttribute(), and contains()
  • Chrome debugging tools

by garybbb at December 07, 2014 08:40 PM

James Laverty

Attempting to install Chromium

Imagine waking up to a knocking at your door <knock knock> <knock knock>. You get up, dress yourself and approach half asleep, pull the door open and, with your eyes half open, see a soldier marked UPS. After a shake of your head you realize it's just a deliveryman, and you get excited remembering the laptop you ordered two weeks ago!

Start it up, share a moment of unbridled joy as it starts instantaneously, then weep as the hard drive dies like a car accident gone terribly wrong. <knock knock> The laptop's been replaced!

Now, back to where I was two weeks ago: with my new computer and a newbie guide to open source, eyes bright and spirits high. Like a landslide my hopes started to fade; I hit error after error, but kept on going.

I attempted to install it for approximately 5-7 hours. After that I left it running overnight; it probably did not take too long, but it failed. When I woke up I tried again, and it failed again. Eventually I ended up in the land of rebase and felt defeated. I was worried both about starting over and about rebasing.

I decided to attempt installing Lynx instead, and things went far smoother.

Cheers,

James Laverty

by James L ( at December 07, 2014 02:12 AM

December 06, 2014

Yasmin Benatti

Release 0.4

My fourth release was also within translation and localization. Some of the resources that I had translated on Release 0.3 were updated, some were removed and some were added. I’m really enjoying doing this. It is a nice task, that fits with my skills and desires.

One of the web sites I’m translating is this one: the web site for Mozilla’s 2014 fundraiser (since you are here, make a donation too!!). Overall, it was a simple project to work on; the only challenges this time were the same ones I had on Release 0.3, such as literal translations, missing HTML tags and translation conflicts. I would like to highlight that I’m learning a lot of English, which is great for me since it is my number-one goal while being in Canada. I really, really liked the translations from the user “henrique20″ in the Webmaker Explore repository. He has a very clean way of writing, and his translations fit the style of Brazilian web sites.

Portuguese-BR is one of the most translated languages in the fundraiser repository, and one of the only ones completely translated and reviewed for the Mozilla-2014-EOY-Campaign archive. In the pictures below, light green means translated and dark green means reviewed.

Mozilla Fundraiser

Mozilla Fundraiser Repository




It is a pleasure to work with Mozilla’s translation. I’ll keep my eyes open to the new resources even though the open source classes are done. I’ll also look for more stuff in the area, like the Twitter’s localization web site.


by yasminbenatti at December 06, 2014 08:28 PM

Gideon Thomas

The deeper you go…the more you find!

Hey everyone!

I’m sorry for being away for a while. But now that school is almost done, I can write more blog posts :)

So I have been working on my release for the Open Source class this past week. I decided to revisit an old bug that I had discovered and began fixing in Filer but never got around to completing it after running into a roadblock.

The bug I was working on dealt with not being able to rename a directory. I found it while trying to use this feature in MakeDrive. At first, I thought this bug would be fairly simple to solve, since a rename is effectively just a delete/create procedure. It turns out it was not that simple. I changed the way the create/delete occurred to account for the difference between a file and a directory, but that created one of the most bizarre problems I have seen: somehow, the node that was created magically disappeared between steps, so when I tried to access that node, it failed badly. I dug around a bit at the time but had no luck figuring out what was wrong.

The problem was that I associated the “missing” node with the “deletion” part of the rename. That did not seem like a big logical leap, so I went with it. But when I revisited the problem a week ago, after looking into how the node was being deleted, I realized it would also make sense to check how the creation happened. I dug around a lot, and an obvious problem presented itself a few days in.

The creation was happening by creating a “hard” link (basically a copy of the metadata) to the old node. This had one major implication that I did not account for. The IDs of the link and the old node would be the same. So, when we attempt to delete the old node, as deletion happens by ID, it deletes both nodes. This is why the new node vanished. I had finally figured it out.
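The failure mode described above (a hard link sharing the old node's ID, then deletion by ID removing both entries) can be shown with a toy in-memory model. The structures here are illustrative, not Filer's actual node table:

```javascript
// Toy directory table: each entry maps a name to a node id.
// A hard link copies the id, and deletion works by id.
function makeTable() {
  var entries = [];
  return {
    create: function (name, id) { entries.push({ name: name, id: id }); },
    // Hard link: the new name shares the old node's id.
    link: function (oldName, newName) {
      var node = entries.filter(function (e) { return e.name === oldName; })[0];
      entries.push({ name: newName, id: node.id });
    },
    // Delete by id removes EVERY entry with that id -- including
    // the freshly created link. This is the vanishing node.
    removeById: function (id) {
      entries = entries.filter(function (e) { return e.id !== id; });
    },
    names: function () { return entries.map(function (e) { return e.name; }); }
  };
}
```

Renaming via link-then-delete then leaves the table empty: after create('old', 1), link('old', 'new'), and removeById(1), both entries are gone.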

But that was not all! While going through how hard links were generated, I noticed during my research online that directories were not really encouraged to have hard links. Well, our code did not deal with that and hence came bug #2.

Since I took the effort to dig deep into the code, in true keener fashion I decided to continue the rename patch as close to the IEEE specs as possible (which basically means a lot of extra functions). At the same time I fixed the link on directories problem.

It took me a while (and I had to go over the deadline a bit) but I ended up with more than I had set out to fix.

P.S. Stay tuned for a blog post soon on how I learnt Redis!

by Gideon Thomas at December 06, 2014 07:04 AM

Gary Deng

OSD600 Expands my “Open Source” Thinking

Today is the last day of this semester, and I am still working on my final release of the OSD600 project. During the past fourteen weeks of study in my Open Source Development (OSD600) course, I have learned the technological aspects (advanced JavaScript knowledge, how to use Git, Node.js, and a bunch of popular open source tools such as Polymer, Grunt, and many more), social aspects (how to get help from other open source developers via IRC), and pragmatic aspects (direct involvement in the Mozilla Appmaker project) of developing open source software.

Moreover, this course has expanded my open source thinking. I am starting to prefer open technologies over exclusive ones. I value group-based problem solving and prefer tools that allow for social collaboration and sharing. In addition, I am quite comfortable using existing open source code to solve business problems. I look forward to learning more in OSD700!

by garybbb at December 06, 2014 04:57 AM

Ryan Dang

Final thought on Intro to Open Sources course

There are a few reasons why I decided to take open source as one of my professional options: it doesn’t have tests or exams, it gives students opportunities to work on actual projects, and Seneca doesn’t offer many professional options that focus on web development. I was really happy when I found out there were a few web development projects offered for students to work on. I chose to focus on the webmaker-app project; I think it is still at an early stage without a lot of code, so I could get involved much more easily than with other projects. All my pull requests for the project can be found here.

I think the course is pretty interesting. However, it would be nice if students had more options for which projects to work on. There are thousands of open source projects out there, and limiting the choices to 7-8 projects might hinder students. Maybe students should have the option to pick any open source project they like. I have no complaints about the projects that were offered, because I knew a bit about Node.js beforehand; however, some students I know complained that all the projects use Node.js and they are not familiar with it. Students would be a lot more productive and more enthusiastic working on projects they are personally interested in.

by byebyebyezzz at December 06, 2014 04:51 AM

Hosung Hwang

CordovaStabilizer – Chromium WebView and Android Source Code

So far, I checked :

  1. Building Cordova
  2. Building Chromium WebView for Android

In Chromium WebView, there were some issues:

  1. doesn’t work in Android 4.0.2
  2. doesn’t work in emulators

The first issue is caused by the internal implementation of MediaDrmBridge. I haven’t discovered whether it is simply not implemented in Ice Cream Sandwich or whether there is another problem; when I deleted one line, it worked.

*ADD : According to this document (, MediaDrm was added in API level 18, which is Jelly Bean (4.3). So 4.0.2 doesn’t support this functionality.

The second issue is an OpenGL ES enabling problem. Although I enabled every option for it, it didn’t work; even the OpenGL ES example didn’t work. It could be a problem with the Linux emulator, as this doesn’t happen on a real device.

Next step is comparing Chromium WebView source code and WebView implementation in Android OS.

Android Source Code

I downloaded android source code.

$ mkdir android
$ cd android
$ repo init -u
$ repo sync

The size is 29GiB. I had to use additional USB HDD.

I simply checked where the WebView source code is.

WebView part is : android/frameworks/webview

However, this folder contains only 20-odd source files, similar to a subset of the Chromium source files. These files look like interfaces only; if so, the actual browser code must be somewhere else in the 29GiB.

A comment in says : # This package provides the ‘glue’ layer between Chromium and WebView.

Therefore, the glue layer is:

Android : android/frameworks/webview/chromium
Chromium : src/third_party/android_tools/sdk/sources/android-21/com/android/webview/chromium

The real Chromium source is:

Android : android/external/
Chromium : src/

However, the folder size is only 1.3GiB; it should be around 13GiB.
When I compared them, there were many differences.
I guess Google ported the most recent stable version of Chromium for the Android WebView framework.

By the way, and were identical (the sources that caused the 4.0.2 issue).
Their paths were:

hosung@hosung-Spectre:/media/hosung/SSD/android/external/chromium_org$ find . -name MediaDrmBridge*
hosung@hosung-Spectre:/media/hosung/SSD/android/external/chromium_org$ find . -name

by Hosung at December 06, 2014 02:00 AM

Hunter Jansen

Bowtie Final

Bowtie Final

Written by Hunter Jansen on December 06, 2014

So this is going to (probably) be my final post on bowtie and as this (my final) semester wraps up, the last post for SPO600 at Seneca College. There’s not really too much left to say, so this post is most likely going to be shorter than the past few.

Since my last post, I’ve gone through testing bowtie as much as I could, ensuring all the commands I know of work after my changes, and testing performance with and without my changes (and with the fedpkg version). I’ll post briefly on my findings and round everything off with some final thoughts.


In the last post, I mentioned that I’d reach out to upstream to ask about some changes to the makefile, as well as to declare my intentions and say hi. I reached out via their SourceForge page, which appeared to be the only real avenue of conversation for the project, but have yet to hear back. In my research along the way, I’ve seen that the bowtie team is commonly unresponsive to issues and communications, so this isn’t very surprising, but it is a little disappointing.

It occurs to me now that I haven’t written any updates since properly updating my code to include ifdefs, so I apologize for that. In the end, it didn’t end up being very much work to get those in, and I learned that the compiler predefines a macro (__aarch64__) that you can check against on 64-bit ARM systems. Instead of writing a large post about content that’s largely self-explanatory, I’ll just link to my github repo commit.

The main files to look at are ebtw.h and third-party/cpuid.h; for a bit more in-depth reasoning behind the code, you can read the previous entry.


So, as part of me accepting that I’ve changed things in a way that doesn’t break the existing implementation, I had to run some tests to make sure that not only does stuff work, but it works at or better than the existing solution.

Unfortunately, bowtie doesn’t have an existing test suite, so the way I went about testing everything was by running a few of the more common commands in the getting started guide on the following setups:

  • The fedpkg code that I got on x86
  • The bowtie repo from upstream on x86
  • The updated code from me on x86
  • The updated code on arm64

The main way I did this was with the time command in a short for loop:

time for i in {1..10}; do
    time ./bowtie e_coli reads/e_coli_1000.fq
done

This runs the e_coli reader 10 times, printing the time for each run, with the outer time giving the total.

So here’s the results for that command. (Note that I did perform similar tests on other commands, but for brevity I’m just including this one):

  • Fedpkg:
real	0m0.074s
user	0m0.041s
sys	0m0.020s

real	0m0.717s
user	0m0.390s
sys	0m0.186s
  • clean repo x86
real	0m0.116s
user	0m0.080s
sys	0m0.023s

real	0m1.166s
user	0m0.757s
sys	0m0.268s
  • Updated repo x86
real	0m0.070s
user	0m0.037s
sys	0m0.019s

real	0m0.745s
user	0m0.402s
sys	0m0.194s

  • Updated repo arm64
real	0m0.120s
user	0m0.080s
sys	0m0.020s

real	0m1.210s
user	0m0.810s
sys	0m0.190s

Sooooo, everything looks fine from that. Oddly, the updated code is more on par with the fedpkg version than the unadulterated git version. Also, an important thing to note is that the arm64 machine has consistently had slower execution times for everything throughout these experiments, not just on bowtie, so the code isn’t necessarily slower on arm64, just on this machine.


So with all that done up, the final step was to remove all the extra things that got added along the way via running some of the testing bowtie commands.

I also decided to take into consideration that the built version of bowtie I should push would be the x86 version, as it’s currently still primarily an x86 run program.

Following both of these, I made one final commit and sent the pull request with results awaiting. If there are any revisions required by upstream, I probably won’t hear about them until after the semester’s over, but I’ll post updates on here regardless of the outcome.

Final Thoughts

It’s almost kind of bittersweet to be done with bowtie and spo600, especially as it’s really the last work I’ll be doing during my time at Seneca college. Even though my trajectory of front-end web development probably won’t use anything learned in the course, it’s still a neat tool and important knowledge to store in my utility belt of programming stuff.

As I mentioned earlier, if bowtie needs more work to get into upstream, I’ll finish it off and post about it here. I’ve wanted to contribute to an open source project for a long time, but have been shy about it for a bunch of self-conscious reasons. So while this is only my first contribution, hopefully it’s just the start of many to come, to all sorts of interesting projects.

Thanks to Chris for being such a knowledgeable prof and quickly providing help and feedback wherever it was needed, and thanks to whoever else ended up reading these posts; it was truly an experience!

Done for now, but until next time

December 06, 2014 12:00 AM

December 05, 2014

Jordan Theriault

Release 0.4 – Map Block Pull Request

Map Block

I have submitted a pull request for the Map Block in Mozilla’s Webmaker App. This component was extremely helpful for learning the nuances of programming with Node.js, Vue.js and Leaflet.js. Currently it locates a position on a map from latitude and longitude coordinates and displays a marker at the specified location. You can optionally add user-supplied information, such as an address or address title, to a popup window.

The Map brick is at this moment the most complex and most memory-consuming brick in Webmaker App, but in the future additions may be made to reduce its load on the network, such as better caching of map tiles. However, less network work means the application itself becomes more bloated, so that is a trade-off to explore in future fixes.

There were many issues, but to highlight:

  1. Choosing a map provider was a very difficult process, which took a great deal of research into licensing and map tile servers. In the end, I decided to use MapQuest, as there are no limits on use and it is free for anyone to use within their application. Since Webmaker App lets others create their own applications, this was the most permissive server to use, allowing people to create their own apps without worrying about licensing.
  2. Using maps in a dynamic application was difficult due to re-loading of the page and the fact that it is run on Vue.js which manages the presentation. Mitigating the presentation of the map with Leaflet involved re-setting the map during page re-loads.
  3. Multiple maps on the same page was perhaps one of the most difficult tasks to solve. I used the singleton paradigm to allow access to Leaflet from all blocks within the app. Using a unique ID (based on the newly created block) for each map div, and keeping the map object within the module data in JavaScript, allowed me to have multiple maps in a single app, and multiple apps with maps.
  4. Leaflet has a strange proprietary statement for referencing the marker icon resource. However since Leaflet is imported as a node module, once the project compiles the node modules folder is no longer accessible. Therefore, I had to move the marker icon images into the static resources folder and reference this folder.
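Point 3 above can be sketched like this. The ID scheme and function names are illustrative, not the actual Webmaker App code, and the Leaflet calls assume a browser with a matching <div> already in the DOM:

```javascript
// Each map block gets its own container id, so Leaflet can manage
// several independent maps on the same page.
var mapCount = 0;
function nextMapId() {
  mapCount += 1;
  return "map-block-" + mapCount;
}

// Browser-only wiring (requires Leaflet and a <div> with this id);
// the tile URL is an illustrative MapQuest endpoint.
function createMapBlock(lat, lng) {
  var id = nextMapId();
  var map = L.map(id).setView([lat, lng], 13);
  L.tileLayer("http://otile1.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.png", {
    attribution: "Tiles courtesy of MapQuest"
  }).addTo(map);
  return { id: id, map: map };
}
```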

All in all, I am very happy with the map block. But for the future I would love to add geocoding in order to get longitude and latitude coordinates from an address and possibly customize the visuals.

You can check out the pull request here.

by JordanTheriault at December 05, 2014 11:05 PM

Linpei Fan

Changes in Open Source in Last 3 Months

This semester, I had the opportunity to take David Humphrey’s OSD600. In this course, I learned the culture of the open source community and how to get involved in the development of an open source project. The open source model is becoming more and more popular, and a lot of companies have open-sourced their software. Although it has been only three months since I started learning open source development, the changes in the open source community have been large and obvious.

This October, the 13th FSOSS conference was held at the Seneca@York campus, and I participated as a volunteer. There were more than 200 registered attendees, nearly a hundred more than last year. From the number of FSOSS participants, we can see that more and more people are interested in open source.

Moreover, the biggest news in the open source community during the last three months was Microsoft’s announcement on Nov. 12, 2014 that it was open-sourcing the full .NET server core stack for cross-platform use. Developers can begin engaging with the breadth of the .NET open source project on GitHub. It uses the MIT License, one of the most popular licenses in open source projects, instead of Microsoft’s own open source license for the dotnet project. That was a big step for Microsoft in the open source community. It is also a big sign that open source is becoming widely recognized and adopted as an effective development model in the software industry: Microsoft, a company that has always led development models in the industry, is moving into the open source community. We can anticipate that more and more companies and developers will participate in the open source community.


by Lily Fan ( at December 05, 2014 08:22 PM

December 04, 2014

Jordan Theriault

Map Block – Attributions

As I further the development of the map block for Mozilla’s Webmaker App, I have come to an important decision in its development: choosing a tile provider. Running a server to deliver tiles is not an easy task, and depending on how widely Webmaker App is used, it may be necessary to start a tile server on the Mozilla network. Until then, there are a few free alternatives.

I decided to use Leaflet.js to develop this map block, and thankfully it makes it extremely simple to switch between tile servers.

Open Street Maps (OSM) is a great source for map tiles, and I have used their tiles for testing up until now. However, their tile server is forbidden for use in applications as per their terms. Therefore, based on recommendations on their website, I have turned to MapQuest's tile server which, to my surprise, is free for use in applications as long as it is properly attributed.
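With Leaflet, switching providers mostly means swapping the tile URL template and the attribution string. Below is a minimal sketch of how that switch might be structured; the URL templates and attribution strings are illustrative placeholders, not the exact values Webmaker App uses:

```javascript
// Illustrative tile-provider configs; URLs and attribution text are placeholders.
var providers = {
  osm: {
    url: 'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
    attribution: 'Map data &copy; OpenStreetMap contributors'
  },
  mapquest: {
    url: 'http://otile{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.png',
    attribution: 'Tiles courtesy of MapQuest; map data &copy; OpenStreetMap contributors'
  }
};

// Pick a provider by name, falling back to OSM.
function tileConfig(name) {
  return providers[name] || providers.osm;
}

// In the browser, the chosen config would then be handed to Leaflet, e.g.:
//   var cfg = tileConfig('mapquest');
//   L.tileLayer(cfg.url, { attribution: cfg.attribution }).addTo(map);
```

Keeping the provider details in one config object means the rest of the block never needs to know which tile server is in use.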

I have seen many suggestions on the Webmaker App issues page regarding looking up coordinates from an address, among many other options. But the reality is, these features require services, and services require licenses.

At this point, as an open source project, the map block is limited to locating based only on longitude and latitude coordinates, until an agreement is worked out with Google, which seems to be the leading force in address-to-coordinate technology.

I only have a few more things to fix for this block to work in a basic form:
– Fix a strange bug where the marker icon does not load (could be a node package path error)
– Allow multiple map blocks to work in the same app (if you have experience with this, please help me)
– Fix the Leaflet attribution on the edit page so it doesn't overlap other content

If you would like to comment or give me some help, you can check out my progress on the branch mapblock-leaflet.

by JordanTheriault at December 04, 2014 10:43 PM

Yoav Gurevich

The Grand* Finale

Pull Requests Associated with this Milestone:

Filer Issue #303 PR
Appmaker Issue #2338 PR

With the hope of not sounding overly dramatic, the past week and a half has been quite overwhelming in just about every direction of my life. As for my plan for the 0.4 milestone, my original goal of fully realizing a live iframe instance of the remixed app template was deemed both too onerous and unnecessary, due to factors such as the ever-changing Webmaker tools pages and assets, which would make for a very volatile environment in which to reliably parse or retrieve data objects over a long period of time. My 0.3 implementation is apparently more than sufficient in its current state, and all that was requested of me (as briefly described in my earlier post) was to fix up the URL reference to an HTTP route that they will add functionality to in the near future, and merge my solution shortly thereafter.

This turn of events left me scrambling for a suitable alternative in a very constrained and stressful period of time. I quickly perused a handful of Mozilla project code bases I'm familiar with for anything reasonably close to my goals for this semester's end. I recalled leaving behind a very small Appmaker bug I was assigned a month or so ago by Scott Downe, involving a couple of value-setting additions to JSON objects, in lieu of my 0.3 milestone. Most of the effort for implementing this involved asking Scott where the JSON blobs were located in the codebase so that I could simply inject the properties and call it a day. This was a start, but something more sizable and challenging was needed to complement it for a respectable release.

It was at this point that I turned to an early project I contributed a little code to at the start of my work with CDOT this past summer: Filer. An issue Mozillian coding tour-de-force @Modeswitch put up, dealing with rebuilding an old performance test of his for this module so that it can run both in node and in the browser, as well as cater to a provider-agnostic configuration, seemed to fit the bill. This required a complete reintroduction to my knowledge of Browserify, as well as a refresher crash course on some Filer methods and invocations. I needed help with this one, and thankfully I managed to get some time and guidance from former CDOT team members Gideon Thomas and Kieran Sedgwick on my options and concerns about how to get started and solve this bug. As per the link at the top of this blog post, a pull request is already up. I'm still not 100% sure of my solution, but after requesting review from all the major Filer contributors, so far the only corrections I've had to make were minor. I am waiting for good news either way, be it more input on how to complete this if anything was left out, or an "r+" and a merge into the master branch.
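As a rough illustration of the provider-agnostic idea: a timing harness only needs the storage provider to expose a common interface. Everything below (the writeFile interface, the in-memory provider) is my own sketch, not Filer's actual test code:

```javascript
// Hypothetical sketch: time an async write against any storage provider
// that exposes a node-style writeFile(path, data, callback) method.
function timeOperation(provider, done) {
  var start = Date.now();
  provider.writeFile('/tmp.txt', 'hello', function (err) {
    if (err) return done(err);
    done(null, Date.now() - start); // elapsed milliseconds
  });
}

// An in-memory provider satisfying the same interface, so the harness
// runs identically in node and in the browser.
var memoryProvider = {
  files: {},
  writeFile: function (path, data, callback) {
    this.files[path] = data;
    setTimeout(callback, 0, null); // stay asynchronous, like a real backend
  }
};
```

Swapping in an IndexedDB-backed or any other provider with the same interface then changes nothing in the test itself.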

I wish I could have done something even more substantial to conclude my time in this course. It's been inspiring, and it's also been a wonderful opportunity for me to keep my open-source skills sharp after my summer position. The open source philosophy and its realization have now left me spoiled; I often feel that the work I do has genuine impact, and as a person who's no stranger to needing assistance to overcome logical hurdles in programming, that help is available and often happy to turn my shortcomings into strengths. I will inevitably get back to this type of work in one of its many shapes. I'm hoping to acquire a post-graduate position at CDOT to further enrich my experience with this work, and my upcoming group implementation project is heavily influenced by open source workflows and intends to emulate them in order to achieve greater and more efficient productivity in its development lifecycle.

It has been nothing short of a pleasure to be a part of David Humphrey's world, whether as a student or a research assistant. I owe this man much gratitude for extending and bequeathing his passion unto me. I do not take that lightly. I very much hope to still remain a part of it for years to come.

by Yoav Gurevich ( at December 04, 2014 09:33 PM

Edwin Lum

Eigen3 project conclusion!

As the semester draws to an end, my project heads towards a close as well. Since the last post, I was ready to reach out to the upstream community and touch base regarding the status of the package on ARM64, as well as to show them the results of the test suite, which ran to a 98% success score. I have to admit it was my first time subscribing to and using a mailing list to contribute, and as such I was a little unsure and lacked confidence, so I took a little longer before finally sending my results and touching base with everyone. Luckily for me, the experience was very positive: I got a response from a fellow named Konstantinos within a couple of hours, and it brought good news. It turns out that he had done the original NEON (ARMv7) port a few years ago and, around a month back, posted a new patch extending it to 64-bit to utilize ARM64 Advanced SIMD (aka ARMv8 NEON).

With the link to the patch, I had to take a look and investigate what changes were made to the code. Since my approach at the beginning had been to look for inline assembler code in the source, and I had found none, my interest was piqued as to what was changed. From his explanation in the email, all he had to do was extend the existing code to use the new intrinsics for the 64-bit datatypes ([u]int64_t, double).

The code linked totalled around 200 changed lines, contained in two files: complex.h and packetmath.h. From what I can make of it, as Konstantinos explained, the patch extends the code to take advantage of the new intrinsics introduced in ARMv8 NEON for the two new datatypes. Because this is a new addition rather than existing assembler, it did not come up in our search for inline assembler code.

With this email exchange confirming that the 98% pass rate is an acceptable value and that the port is considered stable enough, it is fair to say that Eigen3 is now supported on AArch64. The next and final step is to mark it as complete on Linaro, and to mark the project as completed successfully :)

A link to the archive of the mailing list can be found here.

by pyourk at December 04, 2014 05:35 PM

Shuming Lin

Firebug – Web Development Evolved

What is FireBug?

“Firebug integrates with Firefox to put a wealth of web development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page”

Firebug is open source; you can find it on GitHub.

Firebug was created by Joe Hewitt, a software programmer best known for his work on the Firefox web browser. Firebug is written in JavaScript, XUL, and CSS, and it is a Mozilla extension released under the New BSD License.

Some features of Firebug that I think are pretty nice:

  • Monitor network activity
    Some web pages take a long time to load, but why? Firebug can help track it down: break the load down by type and watch the timeline unfold.
  • Debug and profile JavaScript
    Find scripts easily and pause execution on any line.
  • Explore the DOM
    The Document Object Model is a great big hierarchy of objects and functions just waiting to be tickled by JavaScript. Firebug helps you find DOM objects quickly and then edit them on the fly.
  • Execute JavaScript on the fly
    A good ol' fashioned command line for JavaScript, complete with very modern amenities.

Firebug is a pretty cool tool to help you with JS development. There are also many more features, which you can find on the Firebug website.

by Kevin at December 04, 2014 06:12 AM

Hosung Hwang

CordovaStabilizer – Chromium Android WebView Issue 4

1. Test OpenGL Example in the Emulator
I downloaded an OpenGL example from this link.
Both the OpenGL ES 1.0 and OpenGL ES 2.0 examples crashed, and the crash point was the same as in the Chrome WebView.
I had enabled GPU Emulation.
2. Emulator Acceleration setting
For Virtual Machine Acceleration, this link says

  • x86 AVD Only – You must use an AVD that is uses an x86 system image target. AVDs that use ARM-based system images cannot be accelerated using the emulator configurations described here.
  • Not Inside a VM – You cannot run a VM-accelerated emulator inside another virtual machine, such as a VirtualBox or VMWare-hosted virtual machine. You must run the emulator directly on your system hardware.
  • Other VM Drivers – If you are running another virtualization technology on your system such as VirtualBox or VMWare, you may need to unload the driver for that virtual machine hosting software before running an accelerated emulator.
  • OpenGL® Graphics – Emulation of OpenGL ES graphics may not perform at the same level as an actual device.

So, I made an Android 4.4.2 emulator for the x86 target. Previously, all my emulators had been for ARM.
However, now none of the Chrome Android versions (WebView Shell, Chrome Shell, Content Shell) worked.
The crash point was not meaningful.
Maybe I need to give up on using the emulator.

by Hosung at December 04, 2014 12:47 AM

December 03, 2014

Hosung Hwang

CordovaStabilizer – Chromium Android WebView Issue 3

When I ran the Chromium WebView, there was a crash on the Android 4.0.2 device, and another crash in the Android 4.4.2 emulator. Today, I found that these two crashes are different.

1. Crash in Android 4.0.2 device

Logcat crash call stack part:

dalvikvm: threadid=1: thread exiting with uncaught exception (group=0x40ab1210)
AndroidRuntime: FATAL EXCEPTION: main
AndroidRuntime: java.lang.VerifyError: org/chromium/media/MediaDrmBridge
AndroidRuntime: at org.chromium.android_webview.AwBrowserProcess.initializePlatformKeySystem(
AndroidRuntime: at org.chromium.android_webview.AwBrowserProcess.access$000(
AndroidRuntime: at org.chromium.android_webview.AwBrowserProcess$
AndroidRuntime: at org.chromium.base.ThreadUtils.runOnUiThreadBlocking(
AndroidRuntime: at org.chromium.android_webview.AwBrowserProcess.start(
AndroidRuntime: at
AndroidRuntime: at
AndroidRuntime: at
AndroidRuntime: at
AndroidRuntime: at
AndroidRuntime: at
AndroidRuntime: at$600(
AndroidRuntime: at$H.handleMessage(
AndroidRuntime: at android.os.Handler.dispatchMessage(
AndroidRuntime: at android.os.Looper.loop(
AndroidRuntime: at
AndroidRuntime: at java.lang.reflect.Method.invokeNative(Native Method)
AndroidRuntime: at java.lang.reflect.Method.invoke(
AndroidRuntime: at$
AndroidRuntime: at
AndroidRuntime: at dalvik.system.NativeStart.main(Native Method)
ActivityManager: Force finishing activity

Crash position (the MediaDrmBridge.addKeySystemUuidMapping call in AwBrowserProcess):

public abstract class AwBrowserProcess {
    private static void initializePlatformKeySystem() {
        String[] mappings = AwResource.getConfigKeySystemUuidMapping();
        for (String mapping : mappings) {
            try {
                String fragments[] = mapping.split(",");
                String keySystem = fragments[0].trim();
                UUID uuid = UUID.fromString(fragments[1]);
                MediaDrmBridge.addKeySystemUuidMapping(keySystem, uuid);
            } catch (java.lang.RuntimeException e) {
                Log.e(TAG, "Can't parse key-system mapping: " + mapping);
            }
        }
    }
}

The crash point was the line "MediaDrmBridge.addKeySystemUuidMapping(keySystem, uuid);". The keySystem and uuid values were valid. Below is MediaDrmBridge.addKeySystemUuidMapping:

/**
 * A wrapper of the android MediaDrm class. Each MediaDrmBridge manages multiple
 * sessions for a single MediaSourcePlayer.
 */
public class MediaDrmBridge {
    public static void addKeySystemUuidMapping(String keySystem, UUID uuid) {
        ByteBuffer uuidBuffer = ByteBuffer.allocateDirect(16);
        // MSB (byte) should be positioned at the first element.
        nativeAddKeySystemUuidMapping(keySystem, uuidBuffer);
    }
}

This is a public static method.

I commented out this line and compiled, and then there was no crash. It seems fine.

//MediaDrmBridge.addKeySystemUuidMapping(keySystem, uuid);

The MediaDrmBridge class is related to DRM video playback, and it seems to set a system-unique ID. However, when I checked these values:

fragments[0] : com.oem.test-keysystem
keySystem : com.oem.test-keysystem
uuid : edef8ba9-79d6-4ace-a3c8-27dcd51d21ed

These values were the same on devices and in emulators.

By the way, the fact that a call to a public static void method fails looks weird. If it were like a C++ static class member, the code area in memory could have been corrupted by some other problem. Or it could be a problem in the Android library, and this MediaDrmBridge might be part of the Android implementation. The comment in the file says Copyright 2013; if so, this line could not run on 4.0.2, which is from 2011.

2. Crash in the emulator

In the emulator, the crash point comes after the first crash point.

LogCat crash call stack:

D/gralloc_goldfish( 1069): Emulator without GPU emulation detected.
E/AndroidRuntime( 1069): FATAL EXCEPTION: GLThread 82
E/AndroidRuntime( 1069): Process:, PID: 1069
E/AndroidRuntime( 1069): java.lang.IllegalArgumentException: No configs match configSpec
E/AndroidRuntime( 1069): at android.opengl.GLSurfaceView$BaseConfigChooser.chooseConfig(
E/AndroidRuntime( 1069): at android.opengl.GLSurfaceView$EglHelper.start(
E/AndroidRuntime( 1069): at android.opengl.GLSurfaceView$GLThread.guardedRun(
E/AndroidRuntime( 1069): at android.opengl.GLSurfaceView$
W/ActivityManager( 386): Force finishing activity
D/dalvikvm( 1069): GC_FOR_ALLOC freed 121K, 10% free 3035K/3356K, paused 21ms, total 23ms
W/InputMethodManagerService( 386): Focus gain on non-focused client$Stub$Proxy@b2d86ca8 (uid=10055 pid=1069)
I/Choreographer( 1069): Skipped 38 frames! The application may be doing too much work on its main thread.

The log says "Emulator without GPU emulation detected".
So, I made a new emulator with GPU emulation, but the result was the same. In that case, the "Emulator without GPU emulation detected" message does not show.

Even when I ran the emulator using this command according to this page(

$ emulator -avd And422EL -gpu on

the result was the same.

Most likely this is an emulator/OpenGL problem, or an OpenGL API version mismatch in the WebView Shell implementation.


1. Make Android 4.0.2 and earlier emulators, and check the crash point to see whether the first solution works.

2. Check old versions of the Chromium source code: do they work on 4.0.2, and do they contain the line that causes the crash? (I wonder whether this is needed.)

3. Check whether this code is used in the Chromium content shell and chrome shell.

4. Research MediaDrmBridge.

5. Check the emulator's GPU/OpenGL support; run an OpenGL example in the emulator.

by Hosung at December 03, 2014 05:34 AM

Jordan Theriault

Back to the Map Block

A month ago, I decided to take on creating a map "block" for Mozilla's Webmaker App. I have since made two other contributions to Webmaker App and shelved the map block for a later date due to its complexity and time constraints. I've now decided to take up the block again, as it is one of the most requested features by users, and I intend to deliver the map block in a simple form in order to allow more additions to it later on.

I have rewritten the changes from my previous "leafbrick" branch on top of the newest version of Webmaker App and created a new branch, "mapblock".

Further, my addition to the Gitbook for Webmaker App Web IDE development can now be viewed here.

by JordanTheriault at December 03, 2014 04:55 AM

December 02, 2014

Hosung Hwang

CordovaStabilizer – Chromium Analysis – WebKit

This video gives very useful information about how Google changed WebKit for Chrome. Chrome still uses the WebKit API; this part is important.

In 2009, when Google started to build Chromium, WebKit was in src/WebKit. However, looking at the source code now, it has been moved to the src/third_party/WebKit directory.

  • Dependencies:
    JavaScriptCore/wtf : utility layer
    V8 : JavaScript engine
    Skia : graphics engine (Windows, Linux); Mac uses CG
    GoogleURL : URL handling
    icu, libxml, etc.

Interesting Stories :

  • Google avoided using STL containers because of problems with memory allocation, poor implementations, etc.
  • They valued source compatibility over binary-level compatibility. They therefore avoided using COM in the Windows implementation, which is good because it is easier than analysing complex binary standards.

by Hosung at December 02, 2014 02:01 AM

CordovaStabilizer – Chromium Analysis – Build (GYP and GN)

1. GYP (Generate Your Project)

Useful Video :

  • gclient runhooks --force : generates
    Windows : .sln, .vcproj (for Visual Studio)
    Mac : .xcodeproj (for Xcode)
    Linux : SConstruct, (target)_main.scons
    each target -> .scons
  • A .gyp file exists in every major directory
    build/all.gyp : master file that pulls in all of the .gyp files in the directory tree
    build/common.gypi : gyp include; common build settings, included by all .gyp files (global settings)
    build/external_code.gypi : include for third-party code from outside the chrome project
    build/linux/system.gyp : system-specific build configuration
  • Basic format (Python dictionary dump format):
    'variables' : { ... }        # variable definitions, conditional file lists
    'includes' : [ ... ]         # other gyp files to include
    'target_defaults' : { ... }  # settings applied to every target
    'targets' : [ {              # target_name, type, dependencies (dictionaries),
      ...                        # conditions (dictionaries, depending on OS)
    } ]
    'conditions' : [ ... ]       # platform-specific settings
I tried to start from all.gyp. However, it has 1388 lines, and the files it pulls in contain yet more files.

And I tried to see a tree of all the gyp files with this command:

$ tree -f -P "*.gyp" | grep ".*gyp$" > gyptree

There were 1058 .gyp files.

Anyway, I think this will be the starting point for breaking down the source code.

2. GN

GN outputs Ninja build files.
It is about 20x faster than GYP.
The Chromium team plans to convert all GYP build scripts to GN.

===to be continued===

by Hosung at December 02, 2014 01:38 AM

CordovaStabilizer – Using Eclipse IDE for building Chromium

If I could use Eclipse to build, debug, and navigate the source code of Chromium (using functions like "go to definition" and "open resource"), the analysis would be much faster than using text editors.

A Google developer explained how to develop Chromium in Eclipse.

This document was written in 2011 and has been kept up to date since, so it seems reliable, and I decided to follow it. It covers both Googlers, who use Goobuntu, and non-Googlers who use other Linux distributions.

Many of the steps are Eclipse settings, needed because of the massive size of the source code.

1. Download Eclipse

He recommended Kepler.

download Eclipse Standard 4.3.2

2. Unpack, and edit eclipse/eclipse.ini to increase the heap available to Java, to prevent memory errors:
Minimum heap : -Xms40m -> -Xms1024m
Maximum heap : -Xmx512m -> -Xmx3072m

3. Turn off hyperlink detection in the Eclipse preferences to prevent the editor from tying up:
Window -> Preferences, search for "Hyperlinking", and uncheck "Enable on demand hyperlink style navigation"

4. From the Help menu, select Install New Software…
add and install Main & Optional features.

==restart eclipse==

5. C++ Perspective setting

Window > Open Perspective > Other… > C/C++
select C++ perspective

Turn off automatic workspace refresh and automatic building, as Eclipse tries to do these too often and gets confused:

6. Build setting

Open Window > Preferences
Search for “workspace”
Turn off “Build automatically”
Turn off “Refresh using native hooks or polling”
Click “Apply”
Create a single Eclipse project for everything:

7.Make Project

From the File menu, select New > Project…
Select C/C++ Project > Makefile Project with Existing Code
Name the project the exact name of the directory: “src”
Provide a path to the code, like /work/chromium/src
Select toolchain: Linux GCC
Click Finish.

The Eclipse status bar shows "Refreshing workspace" and then "C/C++ Indexer: 90%".
That was 4 hours ago. This indexing takes more time than compiling.

8. Source filter setting

In the Project Explorer on the left side:

  • Right-click on “src” and select “Properties…”
  • Open Resource > Resource Filters
  • Click “Add…”
  • Add the following filter:
    • Include only
    • Files, all children (recursive)
    • Name matches .*\.(c|cc|cpp|h|mm|inl|idl|js|json|css|html|gyp|gypi|grd|grdp|gn) regular expression
  • Add another filter:
    • Exclude all
    • Folders
    • Name matches out_.*|\.git|\.svn|LayoutTests regular expression
      • If you aren’t working on WebKit, adding |WebKit will remove more files
  • Click “OK”

====Mon Dec 01 20:11:57 EST 2014====

I am still waiting for the indexing to finish. The next steps will be continued in a later post.

by Hosung at December 02, 2014 01:16 AM

December 01, 2014

Andrew Smith

This causes cancer and birth defects, but only in California

I was looking for some tire repair stuff and happened to come across this:

California cancer front

It doesn’t matter whether you know anything about tires or not, it’s the back that’s interesting:

California cancer back

“This product contains chemicals known […] to cause cancer and birth defects”

But that warning only applies if you’re in California. Or else why did they waste the ink to print “known to the state of California”? :)

by Andrew Smith at December 01, 2014 07:38 PM

Gary Deng

Working on OSD600 Release 0.4

After a long silence (MozFest + vacations) following my previous release 0.2 pull request, I got some feedback from the Mozilla Appmaker team. Apparently, my code doesn't meet their requirements. I decided to continue working on the same bug as my Release 0.4 project. The following is the suggestion from the Appmaker team:

If we want to remove a known active listener from the channel map, for instance, we should give the channel map a function like removeActiveListener and then call channelMap.removeActiveListener(…) on it, rather than extracting all the data from the various sources into local scope, processing then, and then setting them back on remote scope. Ideally, no extraction happens, we just tell the relevant components to run their own local function on their local data, based on passed parameters.
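As a hypothetical sketch of the encapsulation being described (the names channelMap and removeActiveListener come from the team's suggestion; the internal structure is my own guess):

```javascript
// Hypothetical ChannelMap that owns its listener data and exposes a
// local operation, instead of letting callers extract and rewrite state.
function ChannelMap() {
  this.listeners = {}; // channel name -> array of listener functions
}

ChannelMap.prototype.addActiveListener = function (channel, listener) {
  (this.listeners[channel] = this.listeners[channel] || []).push(listener);
};

// The operation runs on the map's own data, driven only by parameters.
ChannelMap.prototype.removeActiveListener = function (channel, listener) {
  var list = this.listeners[channel] || [];
  var i = list.indexOf(listener);
  if (i !== -1) list.splice(i, 1);
};
```

Callers then just invoke channelMap.removeActiveListener(channel, fn) instead of extracting the listener table into local scope, processing it, and writing it back.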

Over the last weekend, I spent a couple of hours reviewing my code and trying to figure out how to refactor it; however, it is not an easy task to accomplish. First of all, I hadn't looked at the code for about a month, so I had to start from scratch again. Time flies, and I only have one week left in this semester. I am going to learn more advanced JavaScript skills, and I hope I am able to fix this bug by the end of exam week.

by garybbb at December 01, 2014 03:57 PM

November 30, 2014

Hosung Hwang

Everyday Text File and Emacs LISP customization

When I saw this article, I was really glad, because I had thought I was the only person who uses an everyday text file. And I picked up some tips to try in the future.

Ten Clever Uses for Plain Text Files That Can Increase Your Productivity

When I was a Windows programmer, I used Total Commander and UltraEdit as my file manager and text editor. I made a simple program that creates a text file named after the day's date, for example diary09_23_2014.txt. Also, UltraEdit has functionality to insert the date and time at the cursor position. I used this combination for a long time to write down my thinking, research results, and pieces of code.

When I started to use Linux, I had to find a good file manager and text editor for this purpose. For the file manager, I tried several programs, including Midnight Commander, Gnome Commander, and Krusader. Among them, Krusader was the best for me: it is very stable and has many functions, including FTP/SFTP connections.

Screenshot from 2014-11-30 16:19:31

Then I had to find a text editor. I tried many editors: GEdit, Tomboy, Geany, VI, VIM, etc. I decided to use Emacs because I knew that Emacs is the world's most customizable editor. At the same time, I knew that it is the world's most difficult editor to use, so I spent more than a week just studying how to use it, with a book called "Learning GNU Emacs". There is still a lot to study.

With Emacs, almost everything is possible using LISP scripts. I haven't learned LISP yet; however, I could make simple functions using basic LISP syntax plus an additional bash script for my text diary purposes.

The bash script ~/sh/ creates a text file named with today's date, e.g. "~/diary/diary11_30_2014.txt", and then appends "====Sun Nov 30 15:56:59 EST 2014====" to the end of the file.

_now=$(date +"%m_%d_%Y")
_now2=$(date +"%a %b %d %H:%M:%S %Z %Y")
_file=~/diary/diary$_now.txt  # this line was missing from the pasted script; reconstructed from the description above
echo "====$_now2====" >> $_file
echo $_file

If I learn LISP more in the future, I could change it to LISP function. However, the benefit of this bash script is that it can be used outside the emacs.

The following functions are in my ~/.emacs file.

;;;;;;;;;;;;;;;; open diary file
(defun diary ()
  (interactive)
  (find-file (substring (shell-command-to-string "~/sh/") 0 -1))
  (rename-buffer "diary"))

(defun diary-search (newnote)
  (interactive "sSearch: ")
  ;; run M-x grep so the results list is clickable
  (grep (format "grep -n '%s' ~/diary/*.txt" newnote)))

(global-set-key (kbd "C-S-d") 'diary)

;;;;;;;;;;;;;;;; insert date and time
(defvar current-date-time-format "====%a %b %d %H:%M:%S %Z %Y===="
  "Format of date to insert with `insert-current-date-time' func.
See help of `format-time-string' for possible replacements")

(defun insert-current-date-time ()
  "Insert the current date and time into the current buffer.
Uses `current-date-time-format' for formatting the date/time."
  (interactive)
  (insert (format-time-string current-date-time-format (current-time)))
  (insert "\n"))

(global-set-key "\C-c\C-d" 'insert-current-date-time)

The insert-current-date-time function is adapted from this link.

When I press Ctrl-Shift-D, the diary() function makes a text file if there is no diary file for today, and opens it in a buffer named "diary". If today's file already exists, it puts the current date and time at the end of the file and moves the cursor to the last position.

Screenshot from 2014-11-30 16:22:09
"M-x diary-search" searches the text diaries inside the ~/diary directory using the grep shell command, as below. I can open a file and see its contents by clicking it in the result list.
Screenshot from 2014-11-30 16:23:19

"C-c C-d" inserts the current time at the cursor position while editing the diary file.

Screenshot from 2014-11-30 16:29:00

If I make a text file every day, after several years there are a great many files. So I merge the files into one file per month or per year. Because every section inside the diary carries the year, date, and time, search works the same, so whether it's a day file or a month file doesn't matter.

It can also be synchronized to Evernote, Dropbox, etc., so I can search from my phone.

by Hosung at November 30, 2014 09:47 PM

Jordan Theriault

Study: Compiling Mozilla Firefox


Compiling a web browser is not a task programmers typically think of doing unless they are writing changes to it and are directly involved in the development of the browser. However, compiling a web browser can be a very educational and interesting look at the technologies, processes, and code base size involved in producing browser builds.

In this case, I will be compiling Firefox on OSX 10.10 (Yosemite) and documenting my process, as well as the resources I used to achieve the compilation.

Typically, wget is used to retrieve the development environment. However, OSX 10.10 ships with curl rather than wget, so wget needs to be installed.

Therefore, the first step is to install the Xcode command line tools. These tools are important for any developer using OSX, and you may already have them installed: try typing $ wget into the terminal. If they are not yet installed, you will need Xcode, which requires a developer account to download. In my case, I already had Xcode installed, so I installed the command line tools from the terminal with $ xcode-select --install.

Once the build environment was downloaded and installed, I downloaded the Firefox git repository. This repo has over a million objects and takes some time to download. I prefer using git, so I downloaded it with $ git clone. The repo is called "gecko-dev" because Firefox uses the Gecko engine, originally used by Netscape.


Once it was downloaded, I used the terminal to navigate to the location I had cloned the repository to. From this point, mach is used to build Firefox. This process can take several hours; in my case, a 2009 MacBook Pro (2.66GHz/8GB) took approximately two hours to build.




To build Firefox, run $ ./mach build

Once it was built, I received the message below. To run the build, the command ./mach run opens the nightly build.

This process, although not complex, highlighted the enormity of Firefox's repository and build process. Gecko has existed since 1997 and has a big community of contributors who are constantly adding and tweaking code. These additions mean Firefox is only growing larger, and as long as it uses the Gecko engine, the compilation process will only grow more complex.

Simple Firefox Build
Linux and MacOS Preparation

by JordanTheriault at November 30, 2014 06:01 PM

Shuming Lin

Build and Run Open Source Browsers Firefox

Comparing the two open source browsers Chrome and Firefox, I found Firefox's build instructions clearer than Chrome's and easier to follow. Therefore, I chose to build and run the open source Firefox browser on Windows 8.1. It took almost four hours to get it running without any build failures.

First, the Firefox build prerequisites.

Hardware Requirements ( Recommended from Mozilla):

  • Recommended: 4GB of RAM (having only 2GB RAM and 2GB swap may give memory errors during compile)
  • High speed internet

My PC:

pc info

Set up the environment for Firefox. There are three sets of build prerequisites: Windows, Linux, and MacOS. If you use Linux or MacOS, follow the Linux or MacOS build preparation. Because I am using Windows, I followed the Windows build prerequisites instructions.


  1. Download and install VS2013 for Windows Desktop if you don't have it. If you have VS2010 or VS2012, you don't need the new version; just follow the instructions to change files or upgrade.
  2. Download and install the MozillaBuild package.
  3. Open the Windows command prompt (Windows + R, type cmd), then go to c:\mozilla-build and run start-shell-msvc2013.bat (there are also batch files for other VS versions).


Secondly, get the source code using the command ( hg clone ). It took me an hour and a half to download the source code. You may try downloading a Mercurial bundle file instead of waiting for "hg clone".


Thirdly, after the Firefox source code has finished downloading, we can start the build. Use the command “cd mozilla-central” and then run “./mach build” to build Firefox. It can take more than an hour, until you get the message “build finally finished successfully!”


Finally, we can run it.


When you set up the environment, read the instructions carefully; they will help you a lot.

by Kevin at November 30, 2014 07:14 AM

Know More About JSON

JSON (JavaScript Object Notation) is a lightweight data-interchange format. I came across it while working with “mozillafordevelopmenthref/webmaker-app“. Before that I knew nothing about JSON or how it works, so I learned it while working on the webmaker-app open source project.

JSON is easy to read and write, and it is based on a subset of the JavaScript programming language. JSON uses universal data structures. It is built on two structures:

  • A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
  • An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.

In JSON, they take on these forms:

An object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon) and the name/value pairs are separated by , (comma).


An array is an ordered collection of values. An array begins with [ (left bracket) and ends with ] (right bracket). Values are separated by , (comma).


A value can be a string in double quotes, or a number, or true or false or null, or an object or an array. These structures can be nested.


A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string. A string is very much like a C or Java string.


A number is very much like a C or Java number, except that the octal and hexadecimal formats are not used.
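The pieces above combine into complete documents. Here is a small hypothetical example showing both structures — an object of name/value pairs containing a nested array and the allowed value types — written to a file and checked with Python’s bundled JSON tool (any JSON validator would do):

```shell
# A hypothetical JSON document: an object of name/value pairs,
# with a string, a number, true/null values, and a nested array.
cat > person.json <<'EOF'
{
  "name": "Ada",
  "age": 36,
  "active": true,
  "nickname": null,
  "languages": ["C", "JavaScript"]
}
EOF

# Validate and pretty-print it; a parse error would make this fail.
python3 -m json.tool person.json
```

If the braces, brackets, commas, or quoting were wrong, json.tool would exit with an error pointing at the offending character.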



This covers the basic things to know about JSON.


by Kevin at November 30, 2014 06:18 AM

November 29, 2014

Hosung Hwang

Clone HDD(SSD) in Ubuntu using Live USB

The SSD in my laptop is 128GB, which is quite small.
The Chromium source code, including object files, is over 25GB.

So I decided to upgrade the SSD to 512GB.

I booted from an Ubuntu 14.04 LTS live USB and plugged the external SSD into a USB 3.0 port.

1. Check partitions

Based on the partition layout, I planned to use fdisk and dd.

$ sudo fdisk -l

Screenshot from 2014-11-29 21:26:27

Oops, it is GPT.

2. Copy partition table

ubuntu@ubuntu:~$ sudo sgdisk -R/dev/sdc /dev/sda
Screenshot from 2014-11-29 21:35:19

Screenshot from 2014-11-29 22:40:14

/dev/sda is 128GB drive, /dev/sdc is 512GB drive, and /dev/sdb is usb drive.

Partition tables copied. (Note that sgdisk -R gives the target the same disk and partition GUIDs as the source; if both disks will ever be connected at once, sgdisk -G on the target randomizes them.)

3. Copy disks.

I copied each partition using the dd command.
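Each per-partition copy looks roughly like this (the post copied /dev/sdaN to /dev/sdcN; the sketch below uses image files instead of real devices so it is safe to run):

```shell
# Make a small test "partition" image and clone it block by block,
# the same way dd copies /dev/sda1 to /dev/sdc1. On real disks,
# double-check if= and of= first: dd happily overwrites the wrong target.
dd if=/dev/urandom of=source.img bs=1M count=4 2>/dev/null
dd if=source.img of=target.img bs=4M 2>/dev/null

# Verify the clone is byte-identical.
cmp source.img target.img && echo "clone OK"
```

A larger block size (bs=4M here) cuts down on syscall overhead, which matters over a long USB copy.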

Although I used a USB 3.0 (SuperSpeed) port on my laptop, it took a very long time.

Screenshot from 2014-11-29 22:41:55

4. Conclusion (failed to upgrade)

Cloning was successful. However, when I opened my laptop’s back cover, the SSD turned out to be a very small module, not a normal-sized SSD.

There was additional empty HDD space, but connecting a drive there needs extra parts for the board plus a bracket.

What I learned is that I can add an SSD if I order the parts from HP.

HP laptop manual says the module’s part number is 700805-001.

Amazon sells it. Expensive!

by Hosung at November 29, 2014 08:45 PM

November 27, 2014

Gary Deng

Build Firefox Browser

I really wanted to build Chrome, but my laptop doesn’t have enough RAM: the attempt took 24 hours and failed. As a result, I decided to go for Firefox. Surprisingly, the Mozilla build instructions are much easier to follow. I have Ubuntu 12.04 running on VMWare and thought I might build Firefox on the virtual machine, but unfortunately that failed with a couple of errors and 75 warning messages. Finally, I tried it on Windows 8.1, where it only took me an hour to build.

Step one: Install build prerequisites (Visual Studio, mozilla-build)
Step two: Run start-shell-msvc2013.bat on my Windows Command Prompt

Even if you are on 64-bit Windows, do not use the start-shell-msvcNNNN-x64.bat files (unless you know what you’re doing). Those files are experimental and unsupported.

Step three: Get the source code use command

hg clone

Step four: build it and run it


If you want to save time and avoid unnecessary mistakes before diving into a big project, read the documentation carefully.

by garybbb at November 27, 2014 04:20 AM

Hosung Hwang


Issue : Android Webview Shell example crashes on Android 4.0.4

Today, I built the source code as a debug build.

1. Debug build

Release build : hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ time ninja -C out/Release android_webview_apk

Debug build : hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ time ninja -C out/Debug android_webview_apk

This seems weird because out/Release and out/Debug look like configuration folders, but the documentation clearly says: “$ ninja -C out/Debug chrome. For a release build, replace out/Debug with out/Release”. The -C flag just tells ninja which directory’s build files to use, so the output directory effectively selects the configuration.

Debug build and run:

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ time ninja -C out/Debug android_webview_apk
ninja: Entering directory `out/Debug’
[4482/11953] ACTION Compiling media_java java sources
real 137m52.642s
user 517m22.844s
sys 24m53.369s
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/ --apk AndroidWebView.apk --apk_package --debug
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/adb_run_android_webview_shell
Starting: Intent { act=android.intent.action.VIEW dat= }

It still crashed.

2. Try debug

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/adb_gdb_android_webview_shell --start
Starting: Intent { }
Extracting system libraries into: /tmp/hosung-adb-gdb-libs
Extracting system libraries into: /tmp/hosung-adb-gdb-libs
Pulling from device: /system/bin/linker
Pulling from device: /system/lib/egl/
Pulling from device: /system/lib/
Pulling from device: /system/lib/
Pulling from device: /system/lib/[…]Pulling device build.prop
130 KB/s (11056 bytes in 0.082s)
/system/bin/app_process.real: No such file or directory
GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <;
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type “show copying”
and “show warranty” for details.
This GDB was configured as “--host=x86_64-linux-gnu --target=arm-linux-android”.
For bug reporting instructions, please see:
Attaching and reading symbols, this may take a while...
warning: Unable to find dynamic linker breakpoint function.
GDB will be unable to debug shared library initializers
and track explicitly loaded dynamic code.
0x400527bc in __ioctl () from /tmp/hosung-adb-gdb-libs/system/lib/

(gdb) bt
#0 0x400527bc in __ioctl () from /tmp/hosung-adb-gdb-libs/system/lib/
#1 0x4006df40 in ioctl () from /tmp/hosung-adb-gdb-libs/system/lib/
#2 0x4013face in android::IPCThreadState::talkWithDriver(bool) () from /tmp/hosung-adb-gdb-libs/system/lib/
#3 0x4013fe80 in android::IPCThreadState::waitForResponse(android::Parcel*, int*) ()
from /tmp/hosung-adb-gdb-libs/system/lib/
#4 0x401404e0 in android::IPCThreadState::transact(int, unsigned int, android::Parcel const&, android::Parcel*, unsigned int) ()
from /tmp/hosung-adb-gdb-libs/system/lib/
#5 0x4013d4d6 in android::BpBinder::transact(unsigned int, android::Parcel const&, android::Parcel*, unsigned int) ()
from /tmp/hosung-adb-gdb-libs/system/lib/
#6 0x401b0f32 in ?? () from /tmp/hosung-adb-gdb-libs/system/lib/
#7 0x40854df4 in dvmPlatformInvoke () from /tmp/hosung-adb-gdb-libs/system/lib/
#8 0x4088f2be in dvmCallJNIMethod(unsigned int const*, JValue*, Method const*, Thread*) ()
from /tmp/hosung-adb-gdb-libs/system/lib/
#9 0x40866c50 in dvmJitToInterpNoChain () from /tmp/hosung-adb-gdb-libs/system/lib/
#10 0x40866c50 in dvmJitToInterpNoChain () from /tmp/hosung-adb-gdb-libs/system/lib/
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

After the error dialog popped up, gdb commands were possible, but the call stack at that moment did not seem meaningful.

3. Try in the emulator

I started an Android 4.4.2 emulator and tried to launch it there, since with eclipse I can usually see more meaningful messages.

In the Android 4.4.2 emulator, the crash happened the same way. This is more serious: in that case, the fact that it worked well on my Galaxy S4 with Android 4.4.2 means nothing.

And the log says :

I/DEBUG(54): b6f39f6c 28006800 e02cd1e6 46294630 f00d4622
I/DEBUG(54): b6f39f7c 1c43e8f8 d11e4607 f9c4f001 29046801
W/ActivityManager(388): Process has crashed too many times: killing!
W/ActivityManager(388): Force finishing activity
D/Zygote(57): Process 1160 terminated by signal (6)

AwShellActivity is the test activity that uses this WebView, so it could be a problem in the test program. The path is ‘/src/android_webview/test/shell/src/org/chromium/android_webview/shell/’

4. Chrome Shell
I built and ran Chrome Shell on the 4.0.2 device.

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ ninja -C out/Release chrome_shell_apk
ninja: Entering directory `out/Release’
[1383/2469] ACTION Compiling chrome_java java sources
../chrome/android/java/src/org/chromium/chrome/browser/ warning: [deprecation] onError(String) in UtteranceProgressListener has been deprecated
public void onError(final String utteranceId) {
../chrome/android/java/src/org/chromium/chrome/browser/ warning: [deprecation] speak(String,int,HashMap<String,String>) in TextToSpeech has been deprecated
int result = mTextToSpeech.speak(text, TextToSpeech.QUEUE_FLUSH, params);
2 warnings
[2469/2469] STAMP obj/chrome/chrome_shell_apk.actions_rules_copies.stamp
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/ --apk ChromeShell.apk --release

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/adb_run_chrome_shell
Starting: Intent { act=android.intent.action.VIEW dat= }

Chrome Shell worked very well.

5. Content Shell
And I built and ran Content Shell on the 4.0.2 device.

hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ ninja -C out/Release content_shell_apk

ninja: Entering directory `out/Release’
[991/991] STAMP obj/content/content_shell_apk.actions_rules_copies.stamp
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/ --apk ContentShell.apk --release
hosung@hosung-Spectre:~/cdot/ChromiumAndroid/src$ build/android/adb_run_content_shell
Starting: Intent { act=android.intent.action.VIEW dat= cmp=org.chromium.content_shell_apk/.ContentShellActivity }

Content Shell worked very well.

6. Conclusion 

Unfortunately, only the WebView shell has problems. However, this is not a problem in the rendering engine itself; it is in the test program or in some WebView-specific code. I hope it is the former.

The next step will be making a test app that uses the Chromium WebView. If that app has no problem, the problem is the test program’s; if it has the same problem, I will need to look at the WebView code, which will be tough.

Or maybe I can port the WebView shell to eclipse for more comfortable debugging.

by Hosung at November 27, 2014 01:00 AM