Planet CDOT

June 21, 2018

Andrew Smith

Asunder in Chinese

Sometimes I forget how many people open source software reaches. I was reading through my web server’s log analyser results and noticed a weird URL as a source of some traffic. Here’s a screenshot of what I found there:

I don’t know whether it’s Chinese or Japanese or some other language; I just think this is so cool.

I wrote the software, a volunteer translated it into another language, and eventually someone wrote a review/tutorial in that language, which will drive even more users to the software.

I love open source. And one of the most amazing things is that it works despite so many reasons why it shouldn’t.

by Andrew Smith at June 21, 2018 01:05 PM

June 19, 2018

David Humphrey

Building Large Code on Travis CI

This week I was doing an experiment to see if I could automate a build step in a project I'm working on, which requires binary resources to be included in a web app.

I'm building a custom Linux kernel and bundling it with a root filesystem in order to embed it in the browser. To do this, I'm using a dockerized Buildroot build environment (I'll write about the details of this in a follow-up post). On my various computers, this takes anywhere from 15-25 minutes. Since my buildroot/kernel configs won't change very often, I wondered if I could move this to Travis and automate it out of our workflow.

Travis has no problem using docker, and as long as you can fit your build into the allotted 50 minute build timeout window, it should work. Let's do this!

First attempt

In the simplest case, doing a build like this would be as simple as:

sudo: required

services:
  - docker

script:
  - docker build -t buildroot .
  - docker run --rm -v $PWD/build:/build buildroot
  # Deploy built binaries in /build along with other assets

This happily builds my docker buildroot image, and then starts the build within the container, logging everything as it goes. But once the log gets to 10,000 lines in length, Travis won't produce more output. You can still download the Raw Log as a file, so I wait a bit and then periodically download a snapshot of the log in order to check on the build's progress.

At a certain point the build is terminated: once the log file grows to 4M, Travis assumes the output is runaway noise (for example, a command stuck in an infinite loop) and terminates the build with an error.

Second attempt

It's clear that I need to reduce the output of my build. This time I redirect build output to a log file, then tell Travis to dump the tail end of the log file in the case of a failed build. The after_failure and after_success build stage hooks are perfect for this:

script:
  - docker build -t buildroot . > build.log 2>&1
  - docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

after_failure:
  # dump the last 2000 lines of our build, and hope the error is in that!
  - tail --lines=2000 build.log

after_success:
  # Log that the build worked, because we all need some good news
  - echo "Buildroot build succeeded, binary in ./build"

I'm pretty proud of this until it fails after 10 minutes of building: Travis assumes that the lack of log messages (which are all going to my build.log file) means my build has stalled, and terminates it. It turns out you must produce console output every 10 minutes to keep Travis builds alive.

Third attempt

Not only is this a common problem, Travis has a built-in solution in the form of travis_wait. Essentially, you prefix your build command with travis_wait and it will tolerate there being no output for 20 minutes. Need more than 20? You can optionally pass it the number of minutes to wait before timing out. Let's try 30 minutes:

  - docker build -t buildroot . > build.log 2>&1
  - travis_wait 30 docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

This builds perfectly...for 10 minutes. Then it dies with a timeout due to there being no console output. Some more research reveals that travis_wait doesn't play nicely with processes that fork or exec.

Fourth attempt

Lots of people suggest variations on the same theme: run a loop in the background that periodically prints something to stdout while your build runs in the foreground:

  - docker build -t buildroot . > build.log 2>&1
  - while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
  - time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
  # Killing background sleep loop
  - kill %1

Here we log something at 5 minute intervals, while the build progresses in the foreground. When it's done, we kill the while loop. This works perfectly...until it hits the 50 minute barrier and gets killed by Travis:

$ docker build -t buildroot . > build.log 2>&1
$ while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
$ time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
=====[ 495 seconds, buildroot still building... ]=====
=====[ 795 seconds, buildroot still building... ]=====
=====[ 1095 seconds, buildroot still building... ]=====
=====[ 1395 seconds, buildroot still building... ]=====
=====[ 1695 seconds, buildroot still building... ]=====
=====[ 1995 seconds, buildroot still building... ]=====
=====[ 2295 seconds, buildroot still building... ]=====
=====[ 2595 seconds, buildroot still building... ]=====
=====[ 2895 seconds, buildroot still building... ]=====
The job exceeded the maximum time limit for jobs, and has been terminated.

The build took over 48 minutes on the Travis builder, and combined with the time I'd already spent cloning, installing, etc. there isn't enough time to do what I'd hoped.

Part of me wonders whether I could hack something together that uses successive builds and Travis caches, moving the build artifacts out of docker, so that I can do incremental builds and leverage ccache and the like. I'm sure someone has done it, and it's in a .travis.yml file on GitHub somewhere already. I leave this as an experiment for the reader.
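For what it's worth, the caching half of that idea might start from something like this (an untested sketch, not a working config; the directory names and volume mounts are my assumptions):

```yaml
cache:
  directories:
    - build      # build artifacts copied out of the container between runs
    - .ccache    # compiler cache, shared into the container

script:
  # mount the cached ccache directory into the container so rebuilds can reuse it
  - docker run --rm -v $PWD/build:/build -v $PWD/.ccache:/root/.ccache buildroot
```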

I've got nothing but love for Travis and the incredible free service they offer open source projects. Every time I concoct some new use case, I find that they've added it or supported it all along. The Travis docs are incredible, and well worth your time if you want to push the service in interesting directions.

In this case I've hit a wall and will go another way. But I learned a bunch and in case it will help someone else, I leave it here for your CI needs.

by David Humphrey at June 19, 2018 02:45 PM

June 18, 2018

Arsalan Khalid

This worked for me, thanks for putting a post out there for it!

What does this addition even do exactly?

by Arsalan Khalid at June 18, 2018 11:29 AM

May 20, 2018

Michael Kavidas


For my final project in SPO600 I was tasked with doing optimization in an open source project. My project choice was FFMPEG. This blog post will outline my journey and what I’ve taken away from it so far.

Step 1 – Finding what to optimize:

FFMPEG is a massive project, so if I was going to optimize it I would have to narrow down a function that could use optimization. Rather than go it alone, I reached out to the community for some advice. At first, what I was looking for was functions already optimized for x86_64 that I could port over to AArch64. A helpful member pointed me to a file that deals with decoding Opus and some AAC samples. This file already has a version optimized in x86_64 assembly.

Step 2 – Making sense of the code and narrowing down a function to optimize

While sifting through the assembly code I had a hard time understanding what was going on. My professor suggested I focus on the C code and work from there. The file has roughly 400 lines of code and 9 functions, so my next step was to find out where I should focus my optimization. FFMPEG has a built-in timer function to facilitate benchmarking: the timer estimates the cycles that a given block of code takes to run (more useful than wall-clock time, with less variance between runs). I used this function to benchmark the functions that I felt I could optimize. Eventually I narrowed my focus to this function:

[Screenshot: the original function]

As you can see the function in question does some simple arithmetic on floating point values. When benchmarked I can see that the function gets hit fairly heavily during decoding:


Step 3 – Writing my optimization

My idea for optimizing this function was to use SIMD instructions to do the arithmetic in parallel. I chose to write my optimization with NEON intrinsics because they provide easier readability and take fewer lines of code to get the same job done. Here is the resulting code:
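The screenshot of the resulting code did not survive, so here is a hypothetical sketch of the kind of NEON-intrinsics rewrite described above (the actual FFMPEG function differs; this multiply-accumulate loop is an assumption for illustration):

```c
/* Hypothetical reconstruction: a loop doing simple float arithmetic,
 * first scalar, then with NEON intrinsics processing 4 floats per iteration. */
static void madd_scalar(float *dst, const float *src, float g, int n) {
    for (int i = 0; i < n; i++)
        dst[i] += g * src[i];
}

#ifdef __ARM_NEON
#include <arm_neon.h>
static void madd_neon(float *dst, const float *src, float g, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t d = vld1q_f32(dst + i);  /* load 4 floats */
        float32x4_t s = vld1q_f32(src + i);
        d = vmlaq_n_f32(d, s, g);            /* d += s * g, 4 lanes at once */
        vst1q_f32(dst + i, d);               /* store back */
    }
    for (; i < n; i++)                       /* scalar tail */
        dst[i] += g * src[i];
}
#endif
```

Note the extra loads and stores around the arithmetic: exactly the overhead that made the real optimization slower than the scalar original.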


Step 4 – Further benchmarking and testing

When benchmarking my optimization I was disappointed to see that it was a lot slower than the original.


When looking into the disassembly of this block of code I can see that the loading and storing of values is taking up much more time than the arithmetic:


The original disassembly:



I believe that the function I chose cannot be optimized this way because the operations needed to load the values into the vector registers take up more cycles than the operations saved by doing the arithmetic in parallel. Overall, I learned a lot about how CPUs work and the many ways a program can be optimized. As a programmer, this course and assignment have really taught me to think about the code I am writing and how the compiler will translate it into machine code.

by mkavidas at May 20, 2018 09:39 PM

May 19, 2018

Mat Babol

DPS911 - Release 1

About two weeks ago, I started DPS911 Open Source Projects, which is essentially a continuation of the DPS909 Topics in Open Source Development class that I took last year. The class size is much smaller: we have 4 students compared to the 30+ we had last time, which is nice because the professor can spend more time with each of us.

For this class, we are starting a project that my professor has envisioned; for now we are calling it unbundled. This project is meant to recreate an operating system for web development in a browser environment. The idea is to have features such as accessing a directory of files, a code editor, a command line terminal, file sharing, and more, available in the browser on any operating system. The project isn't re-inventing the wheel; the technology is already there, and we are just putting everything together. Brackets, for example, will be used for the code editor, while webtorrent will be used for file sharing.


For my first release, I took on issue #12, which was to create the docusaurus for the project. Docusaurus is a tool developed by Facebook to make it easy for teams to publish documentation websites without having to worry about the infrastructure and design details. The site contents are written in simple markdown code, and docusaurus generates a high quality website.

I created the first version of the files and put a pull request in. The PR did not initially get accepted: there were a few bugs, the color theme that I chose wasn't the best, and some of my instructions weren't clear. I fixed all the problems and created a new PR, which was then accepted.

I've learned a lot about git that I previously didn't know. I'm still relatively new to Git, so things like rebasing or pulling from upstream were all new concepts to me. I already feel like my Git skills are expanding.

The docusaurus site looks and feels much better after the improvements that I made. I changed the theme entirely, named the files correctly, and fixed a few other minor changes.

The site works correctly locally; however, up on gh-pages, there are a few resources missing. A few images and the main.css file cannot be reached. After a quick look, the files themselves are not missing, so it seems to be a linking error. I'll look into fixing this issue, then I'll create a new PR.

Next release

For my next release, I'll be working on sharing files using webtorrent. WebTorrent is a streaming torrent client for node.js and the browser. I've briefly looked into this and got parts of it working, however this week I will dive deeper into this. Stay tuned for my progress.

by Mat Babol at May 19, 2018 09:25 PM

May 15, 2018

Fateh Sandhu

ServiceWorkers and xterm.js integration


Service workers are JavaScript scripts that run locally on the machine and communicate with the webpage using postMessage. They run in the background after they have been registered and the browser has installed them, and they can intercept the network requests made by the page. Since they let you control network communication, they can help tailor the page to the requirements of that particular web app. For example, if you want to make an app that can run offline, or that needs to run smoothly regardless of network quality, service workers can use data stored locally in caches to speed up loading.

How they work

First, we set up a basic HTML page that links to the JavaScript file which will register and install the service worker using a promise.

[Screenshot: the HTML page linking the registration script]

Then, to install the service worker, you register it with the worker file as the parameter.

[Screenshot: registering the service worker]

Once it has installed, you can check to make sure that the appropriate page and requests are being handled: all of the files to be cached should be in place and the service worker installed successfully. We make sure that the request received is valid and then respond with the correct response.


[Screenshot: handling fetch requests]
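Since the screenshots above are hard to read, here is a minimal sketch of the registration and install/fetch handling described (the file names and cache contents are assumptions, not the project's actual code):

```javascript
// main.js — runs in the page: register the worker
function registerWorker() {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('service worker registered, scope:', reg.scope))
      .catch(err => console.error('registration failed:', err));
  }
}

// sw.js — cache files on install, serve cached responses on fetch
const CACHE_NAME = 'app-cache-v1';
const FILES_TO_CACHE = ['/', '/index.html', '/main.js'];

function installHandlers(scope) {
  scope.addEventListener('install', event => {
    // keep the worker in the installing phase until all files are cached
    event.waitUntil(
      caches.open(CACHE_NAME).then(cache => cache.addAll(FILES_TO_CACHE))
    );
  });
  scope.addEventListener('fetch', event => {
    // answer from the cache when we can, fall back to the network
    event.respondWith(
      caches.match(event.request).then(hit => hit || fetch(event.request))
    );
  });
}
// in sw.js itself: installHandlers(self);
```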



Xterm.js is a library written in TypeScript that enables apps to run a terminal in the browser. We will be leveraging this library to provide a fully functional terminal in our web app. Xterm.js passes input events to the backend and renders the responses on the screen.

by firefoxmacblog at May 15, 2018 01:25 AM

May 11, 2018

Michael Kavidas

OSD600 My First PR, VSCODE Hacking

For my second part of my final project I was tasked with fixing a bug in an open source project. I decided to work in VSCode because of its large community and it seems very open to newbie developers.

Finding a Bug:

Finding a bug was pretty straight forward using the issue tracker on Github. At first I tried to fix a bug that was much too complicated for my first pull request. After looking again I found a feature request that would be easy for me to do and would provide me some experience with the workflow/ a deeper understanding on how VSCode works.

The Feature:

The feature I was assigned was to add the option to have the active border tab border be positioned at the top of the tab instead of the bottom (the default position). This would allow users/devs more flexibility in making custom themes.

Figuring out the way it works/ Debugging:

The first thing I did was ask for some guidance when I asked to be assigned to the bug. This allowed me to get a helpful jumping off point and narrowed down my search. I was pointed to a file where the color for the border is created. This file is used by themes to set the color scheme of VSCode. I then changed the default color so I could observe the change it has in the editor. After this I still needed to find out where the code that sets the position of the border and how it works. To figure this out I did two things: I first started the debugger and inspected the tab bar, I then searched the code to find where the “TAB_ACTIVE_BORDER” presets were being used. When using the debugger I found out that the border was a boxShadow. Searching the code I found the file that creates the aforementioned boxShadow and sets its position.

Adding the Feature:

To add the feature I:

  1. Added new color definitions for the top of the tab and called them “TAB_ACTIVE_BORDER_TOP” and “TAB_UNFOCUSED_ACTIVE_BORDER_TOP
  2. I added some logic that checks which position is defined and to change the position/color depending on which one is defined.

This allows theme developers to choose to either include a top border or bottom border for their TABS. I then submitted my PR and got a response asking me to make some changes. After making the necessary changes my code was accepted.


This project has been the first time I have contributed to a large code base. Usually in school you work on code written by yourself, giving you a full understanding of the code. Working on a large project like VSCode can be intimidating because of its unfamiliarity and how large the code base is. That being said, this project taught me that although contributing may seem intimidating at first, once you dive in and start playing around it can be surprisingly easy to get involved and if you ever get stuck there is a community of people willing to help. In addition to this, the fact that the couple lines of code that I wrote will be run on millions of computers is a profound realization. In summation, this has been a very rewarding journey so far and I am excited to continue.

by mkavidas at May 11, 2018 09:38 PM

Ray Gervais

Closing Two Weeks Completed of the 100 Days of Code Challenge

After The First Week Was Completed

Forest with Road Down Middle

Wow, how quickly two weeks pass by while you’re busy enjoying every hour you can with code, technology, people, and for once, the weather. I’m even more surprised to see that I was able to maintain a small git commit streak (10 days, which was cut yesterday; more on that below), which is damn incredible considering that I spent 90% of my time outside of work away from a keyboard. I told myself that I would try my hardest to still learn and implement what I could while travelling, opting to go deep into the documentation (reconstructed from my various Git commits and search history below) and learning what it means to write Pythonic code. Still, some progress and lines of code are better than none whatsoever. One helpful fact which made learning easier was my dedication to learning only Python 3.6, which removes a lot of Python 2-related spec and documentation. This allowed me to maintain a narrower, easier-to-target breadth of documents and information while travelling.

Jumping into Different Lanes

What’s more, I found myself trapped in an interesting predicament during the first week. Not knowing where to start, or how much time online challenges would take in the later hours, I opted to decide just as I walked toward the keyboard: ‘What am I building today?’ This means that every day of the challenge, I’ve walked in on a blank canvas thinking ‘Do I want to play with an API? Learn how to read the file system?’ This has been a zig-zag way of exposing myself to the various scopes and processes Python is capable of. I love the challenge, but I also fear the direction would lead me toward a rocky foundation of niche exercises, pick-and-choose projects, and an understanding limited in scope. Learning how to make API requests with the Requests module was a great introduction to PIP, pipenv, and 3rd-party modules. Likewise, dictating the scope of what I want to learn that day made each challenge a great mix of new, old, and reinforcing a different scope compared to yesterday.

For the second week, I wanted to try some coding challenges found online, such as HackerRank’s (thanks Margaryta for sharing), FreeCodeAcademy’s Front-End, Back-End, and Data Science courses, and SoloLearn challenges on mobile. Curious about the output and differences between my previous and current week’s goals, I came to the following thoughts after becoming a 3-star Python developer on HackerRank (an hour or so per day this week):

  • Preset Challenges are better thought out, designed to target specific scopes instead of a hodge-podge concept.
  • You can rate them based on difficulty, meaning that you’re able to gauge and understand your current standing with a language.
  • It’s fun to take someone’s challenge and see how you’d accomplish it. There were many times where I saw solutions posted on forums (after researching how to do N) which I thought I’d never have brainstormed, or which were too verbose, well beyond my understanding, or too simple and stagnant where the logic could have been summed up in a cleaner chained solution.

Experience So Far

Whereas I fretted and stressed over time and deadlines, this challenge’s culture advocates progress over completion. I still opt for completion, but knowing that code is code, instead of grades being grades, is a relieving change of pace which also makes the approach and implementation much more fun. I’ve opted for the weekends to be slightly more relaxed, less heavily focused on code and more on concepts and ideals (perhaps due to my constant travelling?), which also makes my weekday challenges fantastic stepping stones which play off the weekend’s research.

Learning Python has never been an item high up on my priorities, and only through David Humphrey’s persuasion did I add it to the top of my list (knowing that it would benefit quite a bit of my workflow in the future) and opt to learn it at the start of the challenge. From the perspective of someone whose background in the past two years revolved around CSS, JS, and Java, Python is a beautifully simple and fun language to learn.

Simple yet powerful, minimalistic yet full-featured. I love the paradox and contradictions which are produced simply by describing it. The syntax reminds me quite a bit of newer Swift syntax, which also makes the relation easier to memorize. I also gather that, from an outsider’s perspective, the challenge shows growth in the developer (regardless of how they opt to do the challenge) through the body and quality of work they produce throughout the span of the marathon.

An interesting tidbit: I’ve noticed my typical note-taking fashion is very Pythonic in formatting / styling, and you can ask my peers / friends who’ve seen my notes. It’s been like this since high school, with only subtle changes throughout the years. Coincidence? Have I found the language which resonates with my inner processes? In all seriousness, I just found it hilarious how often I’d start to write Python syntax in Markdown files, or even Ruby files; when writing my own notes, the distinction was minimal.

What About The Commit Streak?


Honestly, the perfectionist in me, one quick to challenge itself where possible, was the most anxious about losing the streak, especially since as a developer it seemed like one way to boast and measure your value. I enjoyed maintaining the streak, but I also had to be honest with my current priorities and my time to myself. Quite frankly, it’s not healthy to lose an hour of sleep to produce a measure of code you can check in just for a green square when you’ve already spent a good few hours reading Bytes of Python on the subway, for example, or devoted time to learning more through YouTube tutorials on your lunch break. I thought that I’d use GitHub and commits as a way of keeping honest with myself and my peers, but after reading quite a few different experiences and post-200-days types of blogs, I’m starting to see why most advocate for Twitter as their logging platform. Green squares are beautiful, but they are only so tangible.

Whereas I can promise that I learned something while travelling, perhaps using SoloLearn to complete challenges, I cannot easily port that experience and its results over to Git to validate progress. I suppose that is where Twitter was accepted as the standard, since its community is vastly more accessible and also accepts that not everything is quantifiable through Python files. Instead, saying that you read this, did that, learned this, and experimented with that is as equally accepted as a commit with its 100+ line count.

This doesn’t mean that I’m going to stop committing to GitHub for the challenge, or that I’ll stop trying to maintain a commit streak either; it simply means that I can accept it being broken by a day where I cannot be at my computer within a reasonable time. It won’t bother me to have a gap between the squares once in a while.

I’ve seen friends enjoying the challenge for similar and vastly different reasons too, and I highly recommend giving it a try for those who are still hesitant.

by RayGervais at May 11, 2018 09:34 PM

May 01, 2018

Henrique Coelho

Continuous Integration with TypeScript + Mocha + Istanbul (NYC) + CircleCI

Writing unit and integration tests is the bane of my existence. The sheer amount of boredom produced by this practice would easily make me rich if I were somehow paid to get bored. I would love to meet someone who genuinely enjoys writing tests as a hobby so I could let them write all my tests for free, although my self-preservation instinct tells me that such a person cannot be trusted and will eventually try to stab me with a fish or some other unusual object that will make people chuckle when they read the news.

Anyway, writing unit tests is torture, but it has to be done. Other things that should be done, on top of writing unit tests, are:

  1. Check the coverage of these tests to make sure you did not miss any lines, branches, functions, files, etc.
  2. Continuously test the code pushed into a repository with a continuous integration system. This way, we can easily know if the tests are broken for a pull request

This post will be about joining TypeScript (programming language) with Mocha (test framework), Istanbul (code coverage), and CircleCI (continuous integration).

I created a simple TypeScript project with the following structure (the files are all empty for now, except for package.json, which contains the initial code from npm):

|-- .circleci
|   |-- config.yml
|-- dist
|-- package.json
|-- src
|   |-- print.ts
|   `-- transform.ts
|-- test
|   |-- mocha.opts
|   `-- unit
|       `-- transform.test.ts
`-- tsconfig.json

First, I made a tsconfig.json to configure how TypeScript will be compiled:

{
  "compilerOptions": {
    "module": "commonjs",
    "removeComments": false,
    "sourceMap": true,
    "baseUrl": "types",
    "typeRoots": ["node_modules/@types"],
    "target": "es6",
    "lib": ["es2016", "dom"],
    "rootDir": "src",
    "outDir": "dist"
  },
  "include": ["src"]
}

The "removeComments": false is very important. We will see why later!

I also made a little script to compile the TypeScript code in the package.json file:

"compile": "./node_modules/.bin/tsc"

Let's start with print.ts and transform.ts:

// print.ts
// This is just a dummy function. We won't do anything interesting with it
export function print(v: any) {
    console.log(v);
}

// transform.ts
// This extremely over-complicated function will receive an array of numbers
// and return 0 if the sum of the numbers is 0, 1 if the sum is > 0, and -1
// if the sum is < 0
// I made it complicated so we will have lots of branches to test
export function transform(input: number[]): number {
    if (!input || input.constructor !== Array)
        throw new Error('Input must be an array of numbers!');

    try {
        const total = input.reduce((acc: number, n: number) => acc + n, 0);

        if (total === 0) {
            console.log('The input is equal to zero');
            return 0;
        } else if (total > 0) {
            console.log('The input is greater than zero');
            return 1;
        } else {
            console.log('The input is greater than zero');
            return -1;
        }
    } catch (e) {
        console.error(`Unknown error occurred: ${e}`);
        return 0;
    }
}
Alright. We have the code, now we need to make unit tests for it!

First, I will install the following packages:

  • chai - Has useful tools that will make asserting the results easier
  • mocha - Our test framework
  • @types/chai - TypeScript types for the chai module
  • @types/mocha - TypeScript types for the mocha module

And now I am going to write the test cases for transform.ts:

import { transform } from '../../src/transform';
import { expect } from 'chai';

describe('transform', () => {

    it('should fail if non-array is passed', () => {
        expect(() => transform('Bad input!' as any)).to.throw();
    });

    it('should return 0', () => {
        const result = transform([1, -1, 2, -2]);
        expect(result).to.equal(0);
    });

    it('should return 1', () => {
        const result = transform([1, -1, 2, -2, 3]);
        expect(result).to.equal(1);
    });

    it('should return -1', () => {
        const result = transform([1, -1, 2, -2, -3]);
        expect(result).to.equal(-1);
    });
});

Perfect! We have the test cases done.

Now, here is one problem: should we compile the tests? They are written in TypeScript, so they should be compiled, right? Well, you don't have to. Luckily, ts-node is here to help! Ts-node is a TypeScript interpreter! Although I would not recommend actually using it to run the main script, it is great for running the test cases!

First, installing the packages we need:

  • source-map-support
  • typescript
  • ts-node

Now let's configure mocha to use ts-node:

# test/mocha.opts
--require ./node_modules/ts-node/register
--require ./node_modules/source-map-support/register
--recursive
--exit

Here is what these lines mean:

  • --require ./node_modules/ts-node/register - Here we are telling Mocha to use ts-node as the interpreter
  • --require ./node_modules/source-map-support/register - Support for source maps. Will be useful later with Istanbul
  • --recursive - Test all the files in the directory, not individual files
  • --exit - Force exit after the tests are done (will kill any pending promises)

And let's make an NPM script to run the tests (files that end in .test.ts) in package.json:

  "scripts": {
    "test": "./node_modules/.bin/mocha test/**/*.test.ts",
    "compile": "./node_modules/.bin/tsc"
  }

That's it. Whenever we run npm test, mocha will run all the tests for us. Let's try it:

    ✓ should fail if non-array is passed
The input is equal to zero
    ✓ should return 0
The input is greater than zero
    ✓ should return 1
The input is greater than zero
    ✓ should return -1

  4 passing (7ms)

But that's not all! Writing tests is not torture enough - we need to make sure we write enough tests to cover all our code. This is what code coverage does.

Istanbul (also known as NYC; I actually don't get why it has two names) will make this very easy. I will install the following package:

  • nyc

Easy. Now we can modify the test script so Istanbul will check our code coverage:

  "scripts": {
    "test": "./node_modules/.bin/nyc ./node_modules/.bin/mocha test/**/*.test.ts",
    "coverage": "./node_modules/.bin/nyc report",
    "compile": "./node_modules/.bin/tsc"
  }

Whenever we run the tests, we will get the coverage for our files. I also added a separate script (coverage) for when we just want to see the coverage, without running the tests again.

I will also add some settings for Istanbul in the package.json file:

  "nyc": {
    "extension": [".ts"],          // <- Extensions to be covered
    "include": ["src"],            // <- Which directories should be covered?
    "reporter": ["text", "html"],  // <- Reporters used *1
    "all": true,                   // <- Check all files? *2
    "check-coverage": true,        // <- Enforce a coverage threshold?
    "statements": 90,              // <- Minimum coverage for statements (%)
    "functions": 90,               // <- Minimum coverage for functions (%)
    "branches": 90,                // <- Minimum coverage for branches (%)
    "lines": 90                    // <- Minimum coverage for lines (%)
  }

  1. Reporters are how the coverage is reported to us. In this case, I am asking for two types of reports: text in the terminal, and html files (useful for CircleCI)
  2. If all is set to false, it will only check the coverage of the files used by the test files. If you have a file that was not tested at all, it will not show up in the reports.

Let's take a look at the output of npm test:

ERROR: Coverage for lines (81.25%) does not meet global threshold (90%)
ERROR: Coverage for functions (66.67%) does not meet global threshold (90%)
ERROR: Coverage for statements (82.35%) does not meet global threshold (90%)
File          |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
All files     |    82.35 |      100 |    66.67 |    81.25 |                   |
 print.ts     |        0 |      100 |        0 |        0 |                 2 |
 transform.ts |     87.5 |      100 |      100 |    86.67 |             19,20 |

Cool! But there is a problem there: we still haven't fully tested transform.ts:

    } catch (e) {
        console.error(`Unknown error occurred: ${e}`);
        return 0;
    }

I put that catch there as an example of something I can't really test. Nothing will throw an error there, but sometimes we use something that can fail under circumstances beyond our control, and those are failures that we cannot reproduce.

What can we do then? We can tell Istanbul to ignore lines, like this:

    } catch (e) {
        /* istanbul ignore next */
        console.error(`Unknown error occurred: ${e}`);
        /* istanbul ignore next */
        return 0;
    }

This will only work if "removeComments": false is set in tsconfig.json, otherwise the compiler will remove the comment.

Let's try it now:

ERROR: Coverage for lines (85.71%) does not meet global threshold (90%)
ERROR: Coverage for functions (50%) does not meet global threshold (90%)
ERROR: Coverage for statements (85.71%) does not meet global threshold (90%)
File          |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
All files     |    85.71 |      100 |       50 |    85.71 |                   |
 print.ts     |        0 |      100 |        0 |        0 |                 2 |
 transform.ts |      100 |      100 |      100 |      100 |                   |


I won't bother making the test case for print.ts because that file was there only to show you what "all": true does: even if we are not testing that file, it will show up in the coverage report! Let's just jump into integration with CircleCI.

CircleCI is very easy to set up. Most of the time, continuous integration systems have their own separate environment (such as a container), which is bad news for people who can't get their code running even on their own machine. CircleCI is no exception. All we need to do is describe what the environment should look like and how to run our tests (find more information here).

Here is my .circleci/config.yml that describes how to run my tests:

version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:10.0.0
    steps:
      - checkout

      - run:
          name: Installing packages
          command: npm install

      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules

      - run:
          name: Running tests
          command: npm test

      - store_artifacts:
          path: coverage
          prefix: coverage

In this case, I am asking for a container with Node 10.0. Then I follow these steps:

  1. Install my npm packages
  2. Cache my npm packages (will make the jobs a lot faster)
  3. Run the tests
  4. Save the html files with the coverage (remember the html reporter?) as an artifact, which we can access after the tests are done

As long as our project is set up on CircleCI, it will test anything we push into our repository.

All done!

Repository with the code

by Henrique at May 01, 2018 01:15 AM

April 30, 2018

Ray Gervais

An Introduction to The 100 Days of Code

The day has finally come: the start of the much-discussed 100 days of code! The official website can be found here; it explains the methodology and why(s) of the challenge. I decided that it would be the best way to start learning new languages and concepts that I've always wanted experience in, such as Python, Swift, Rust, and GoLang. The first and primary goal is to learn Python, and to reach a comfort with the language similar to what I have with C and C++.

Expectations & Challenges

I'm not nervous at all about the idea of learning Python, but I am concerned about being able to do an hour of personal programming daily at a consistent rate. Being realistic, right now I still spend three hours commuting on buses and trains, crowded to the degree where it's not viable to program even on a tablet or netbook. I imagine these coding hours will fall in the later hours of the day, since I am no morning person.

I also expect to become rather well acquainted with Python 3 within a week or a few, and I have begun thinking of ways to further my development with the language by using or contributing to Python projects such as Django, Home-Assistant, Pelican, and Beets, for example. This will vary or expand as we get further into the process.

Once content, I want to move to Swift and relearn what I previously did in the Seneca iOS course, attempting to further my understanding while building applications at the same time. I think an iOS application with a Python back end would be a beautiful end result, don't you agree? We'll see.

Here We Go

I cannot say that I will blog every day of the challenge, but I will try my hardest to keep those interested updated through my Twitter handle @GervaisRay. Furthermore, you can keep track of my progress here, where I'll attempt to update each week's README with relevant context and thoughts.

This will be fun, and I can’t wait to see how I, and my peers do throughout the challenge.

by RayGervais at April 30, 2018 11:55 PM

April 29, 2018

Aleksey Glazkov

DPS909 – Lab 3

For this lab I decided to work on issue #42720, “Color picker: no longer appears in settings editor”. It is not a very serious bug; however, the behavior of this color picker is not user-friendly. After using tons of programs, I can definitely say that if I want to expand a color picker I expect to hover over that small red square, but in VSCode it only works when you hover over the text.


Just a simple search in the VSCode source files led me to the file that handles the color picker's behavior – 'colorPickerWidget.ts'. There is a class ColorPickerHeader that renders that small square with the selected color, and a ColorPickerBody that renders the color picker itself.

With the help of the debugger I found a couple of listeners; however, they were set to listen for clicks on the label, while the color picker is shown when the label is hovered. My guess is that this line of code registers the listener that I'm looking for:

this._register(model.onDidChangePresentation(this.onDidChangePresentation, this));



Right now I'm not able to fix this bug; however, after looking through the source code I got a basic idea of what is going on there. Every time I hover over that label, a signal is emitted, picked up by the listeners, and the color picker is displayed.
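That emitter/listener flow can be sketched in a few lines of plain JavaScript (illustrative only; this is not VSCode's actual implementation):

```javascript
// Minimal emitter/listener sketch: hovering the label emits a signal,
// and a registered listener reacts by showing the color picker.
const listeners = [];

function onDidChangePresentation(listener) {
  listeners.push(listener); // what this._register(...) conceptually wires up
}

function emitPresentationChange(event) {
  listeners.forEach((listener) => listener(event));
}

let pickerVisible = false;
onDidChangePresentation(() => { pickerVisible = true; }); // "display the picker"
emitPresentationChange({ source: 'hover' });
console.log(pickerVisible); // true
```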

by alexglazkov at April 29, 2018 03:47 AM

DPS909 – Lab 6

This blog post will cover how different browsers handle input provided to the address bar.

Brave, in comparison to other browsers such as Chrome and Firefox, couldn't handle links containing spaces, e.g. “ cat”. It could trim links, but always left all the white spaces inside.

  • Brave couldn’t open files with whitespaces in path.
  • Chrome can open files with white spaces in path and replaces them with ‘%20’.
  • Firefox also can open files with whitespaces in path but it does not replace them with ‘%20’. They handle URLs similarly.

Writing tests for Brave is not very difficult. I wrote some tests for the getUrlFromInput function. What I had to do was provide some ill-formatted input and check if it's equal to the expected result. Here is one example:

'calls url with leading and trailing whitespaces': (test) => {
  test.equal(urlUtil().getUrlFromInput(' cat '), '')
}


What is similar in the browsers' implementations?

All of the browsers use a set of functions to prepare input before final validation. Brave relies mostly on regular expressions, while Chromium analyzes input step by step in different functions. It looks like Mozilla uses a somewhat mixed approach.

In Brave, to cover the edge cases provided to us, all I had to do was replace all whitespace with '%20', as Brave can handle links and paths with '%20' well enough.
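A sketch of that fix in plain JavaScript (a hypothetical helper, not Brave's actual code):

```javascript
// Trim the input and percent-encode any remaining whitespace runs as '%20',
// which Brave already handles well in links and file paths.
function encodeWhitespace(input) {
  return input.trim().replace(/\s+/g, '%20');
}

console.log(encodeWhitespace('  /home/user/my cat.html  '));
// → '/home/user/my%20cat.html'
```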

Doing this lab, I learned how Brave, Chrome and Firefox handle input from the address bar, how they parse it, and how they decide what to do with it next.


by alexglazkov at April 29, 2018 02:32 AM

April 28, 2018

Aleksey Glazkov

DPS909 – Release 0.3

Hi there!

In this blogpost I will tell you about my experience working on Release 0.3 for my Open Source Development course.

This release was really challenging for me. I tried to fix different bugs in VSCode and Brave.


First of all, I started working on issue #48103, “Saving workspace names with dot (.) removes the last dot”. However, I quickly faced a problem: none of the debuggers stopped at breakpoints placed in the file named workspacesMainServices.ts, where all the code handling workspace saving is located.


“Unverified breakpoint. Breakpoint ignored because generated code not found (source map problem)”. The error tells me that there is a problem with the “source map”; I tried enabling it in the launch configuration file and tweaked some other settings. After researching this error on the web for some time, I tried a bunch of solutions, but nothing helped. I decided to move on.

Next issue I picked up was #48875 “Cmd+Click URL Containing Comma in Integrated Terminal doesn’t follow full URL”.


That sounds interesting. I quickly found the files that handle links in the terminal and started digging in. I figured out that VSCode uses regular expressions to find links within text. There is definitely a problem with the regular expression: I tested it on 3rd-party resources and it didn't work as expected.

VSCode regular expression
Another regular expression that I found on the web

Time to research again. This time I found out that the issue I was working on is actually a duplicate of an already existing issue, and that the problem was not with VSCode but with one of its dependencies, in this case xterm.js; some contributors are already working on that issue. The good thing (well… good thing for me) is that I found another bug while I was working on getting links with commas to work properly in the terminal.


As you can see in this screenshot, the link's tooltip is partially cut off when the link is located on the 1st or 2nd line of the terminal. I had never submitted an issue before and decided that it would be a very useful experience for me. I looked through the issues and couldn't find anything similar. Actually, I found one closed issue with a merged pull request, but it seemed that it was not fixed. I filed my first issue ever. I discovered a reporting tool integrated into VSCode that helps generate issues. As soon as the issue was created, again, I started digging in. After playing a bit with breakpoints I found the piece of code that handles rendering of that tooltip. I played around with CSS properties and guessed a perfect value that displayed the tooltip properly (later I confirmed it in one of the CSS files). Basically, I had to find the height of the tooltip box and always set the CSS bottom property to no more than the container height minus the tooltip box height.
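The clamping idea can be sketched like this (hypothetical names; the actual fix lives in VSCode's tooltip rendering code and CSS):

```javascript
// Keep the tooltip inside its container: the CSS `bottom` value must not
// exceed container height minus tooltip height, otherwise the tooltip's
// top edge is cut off when the link sits on the 1st or 2nd terminal line.
function clampTooltipBottom(desiredBottom, containerHeight, tooltipHeight) {
  return Math.min(desiredBottom, containerHeight - tooltipHeight);
}

console.log(clampTooltipBottom(980, 1000, 40)); // 960 (clamped)
console.log(clampTooltipBottom(100, 1000, 40)); // 100 (unchanged)
```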

Fixed tooltip


Despite the fact that I have not submitted much code for this release, I really got a lot out of it. My growth goal was to learn how different services work inside VSCode, and I decided to do it by fixing different small bugs. In the end, I fixed only one, but the experience I got while working on the others is very valuable.



Just a quick overview of my experience with Brave.

My first experience working on Brave started with issue #8635, “All preferences options (left pane) should be accessible on a small window”. It was well discussed and I saw some proposed solutions there; however, there were no pull requests, so I decided to give it a shot.


When you view preferences in a small window, you can't see some of the options.

Brave in a big window
Brave in a small window. Some options are inaccessible


I added some CSS properties and got the following.

Brave in a small window, “fixed”

Not the best result apparently.

by alexglazkov at April 28, 2018 11:11 PM

April 25, 2018

Aliaksandr Ushakou

Dark Mode Feature Request

My task for today is implementing the dark mode feature for an open source project called bridge-troll. All information about this feature request can be found here. In summary, the user interface is very bright.


This UI looks nice and comfortable during the daytime; however, after sundown, this white palette does not fit under the night entourage, and the eyes start to get tired.

The essence of this feature request is not just to change the color scheme, but to make the UI color theme automatically change depending on the time of the day.

An automatic color theme switch is not so difficult to implement. But, there is one obstacle. Thanks to globalization and the Internet (hmm.. the impact of the Internet on globalization is tremendous), any software can be accessed from all around the world.

An interesting fact:
In countries with strict censorship, where websites and apps are blocked indiscriminately, digital literacy is growing. People have to learn how to bypass prohibitions and set up a VPN. The recent blocking of the Telegram Messenger in Russia is a very good example of this. The Russian government tried to block Telegram, but failed for a number of reasons. People began to spread information about how to bypass the blocking using a VPN. This incident increased not only people's digital literacy, but also the popularity of Telegram, due to the word-of-mouth effect.

Ok, back to the topic. This web app can be accessed from all around the world, so the color mode should switch at the right time regardless of the user location. There are tons of open source JavaScript libraries for all kinds of occasions. The library that I’ve used is SunCalc. SunCalc is a JavaScript library for calculating sun position, sunlight phases, moon position, and more.
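The switching decision itself is simple once sunrise and sunset are known. A sketch (SunCalc really does expose `SunCalc.getTimes(date, lat, lng)`, which returns `sunrise`/`sunset` Dates; the rest of the names here are made up):

```javascript
// Given sunrise/sunset for the user's location (e.g. from
// SunCalc.getTimes(new Date(), latitude, longitude)), it is dark
// whenever "now" falls outside the daylight interval.
function isDarkMode(now, sunrise, sunset) {
  return now < sunrise || now > sunset;
}

const sunrise = new Date('2018-04-25T06:20:00Z');
const sunset = new Date('2018-04-25T20:05:00Z');
console.log(isDarkMode(new Date('2018-04-25T22:00:00Z'), sunrise, sunset)); // true
console.log(isDarkMode(new Date('2018-04-25T12:00:00Z'), sunrise, sunset)); // false
```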

The next step is to find a map in dark colors. Fortunately, this project uses Leaflet. Leaflet is the leading open-source JavaScript library for mobile-friendly interactive maps. Leaflet has a huge selection of different maps for every taste.

Just look at this Mordor-like map. Awesome!


The map that I chose for the dark mode looks like this:


Implementation of the automatic color mode switch took some time. The hardest part was to switch all the icons. The final result looks like this:


That’s all! Thanks for reading and have a nice one!

by aushakou at April 25, 2018 03:16 AM

April 24, 2018

Arsalan Khalid

What does it mean to support? An open source initiative 0.4

I originally set out to offer a lot more code contributions to the Brave browser as part of the open source course I'm finishing right now. However, I learned the hard way that this isn't as easy as one would expect. It is indeed easy to get involved with a project, be a contributor, and support the initiative, but it is far more challenging to support the project in an integral way. This has been a humbling journey, where I've learned not to assume that my technical mind can do something, but instead that I need to put in the work, and the consistent development time, to become better. Believe me, this is a hard thing even to admit to myself, because I know I have skills in some areas that are unmatched at my level, but also skills in my development background that could be a lot stronger. So you just have to keep moving forward and learn from the mistakes, the failures, and most importantly the critical feedback. Which at times can be tough, because people don't mean to take aim at you personally; it's just that you aren't delivering to the standard expected of you. It's a tough pill to swallow, but like Neo, you have to just do it and come into your own.

For this PR, I wanted to identify some contributions that aren't in the form of code, but of documentation and general project-management support. In doing so, I've helped clear up a few issues, or assisted in raising awareness of them. This is not only supportive of the development community behind the project or the issue, but also a great way to immerse yourself in the project's community.

First, I started off fairly basic (as always) by commenting on the validity of an issue.

Maybe Bug: Context searches in private tabs uses default search engine instead of DuckDuckGo · Issue #12639 · brave/browser-laptop

Although the core team of the project got back to me with:

That clarification certainly helps developers who want to take the task on, as well as myself, should I choose to dabble further in this issue.

Duckduckgo as default search engine (instead of google) · Issue #9748 · brave/browser-laptop

This next issue alludes to a similar premise as the one before (pardon the thumbnail of the issue link). If you look closely at the thread within this issue, you'll notice a cohesive debate and discussion around the use of DuckDuckGo within the Brave browser, and around making DDG the de facto default search engine. That raises an interesting question: why is Google set as the default in the first place? One contributor mentions that this goes against the privacy focus Brave so heavily markets:

I personally don't have a stance on this subject, but it's interesting to see the range of opinions on this matter, and the dislike towards what many would deem the status quo of search. The truth is already out there, though: many engineers, and now the general public, know about the ever-looming privacy issues with the likes of Google, Facebook, AirBnB and all the big San Fran giants. Mozilla also falls slightly into this mix, but keeps an allegedly firm stance on users' data and the sharing of it. Nonetheless, I personally find it a bit fishy that Mozilla is a near billion-dollar-revenue company whose core service is the browser. Where's all this money coming from? Apparently from key 'search deals' they have with various ISPs, search engines, and much more.

The context of Mozilla and its revenue objectives does bring forth some curiosity about how this model fits within Brave, as the captain of the ship is the esteemed Brendan Eich, after all.

Moving on from that side note, I kept moving through small tidbits I could support on, such as:

Bookmark search · Issue #13172 · brave/browser-laptop

I offered some insight in their thread to draw the connection to a commit that relates to this, but is slightly different:

Let's try investigating what they did and look more closely at their single commit:

Search bookmarks as soon as characters are typed by MKuenzi · Pull Request #4097 · brave/browser-laptop

The small, single commit already implies this isn't too intensive a problem, and it gets us into debugging React components live! That's cool; I haven't done that before, so let's give it a shot:

First try at running the debugger; things can never be so straightforward:
Debugger listening on port 5858.

Warning: This is an experimental feature and could change at any time.
Crash reporting enabled
Unhandled promise rejection in the main process OpenError: IO error: lock /Users/arsalan.khalid/Library/Application Support/brave/ledger-rulesV2.leveldb/LOCK: Resource temporarily unavailable

I basically dropped this after finding the debugger to be a bit of a pain. I know that isn't the greatest, but I need to keep moving; I think I've offered a small bit of support here.

I then closed off this release by looking at a few relevant triage issues:

And this was the most interesting as it related to one of my other bugs and tests I’ve been looking at:

Seems like the above issue is only testable in master on Windows at the moment. I’ll wrap up here, hopefully this is mildly amusing :)

Thanks for tuning in, these were a bit of my musings through finding areas to mildly contribute towards the community and on-going tasks on the Brave browser project.



by Arsalan Khalid at April 24, 2018 06:09 PM

Testing a browser? Being brave enough to do it. Release 0.5. Halfway! Damn. So what are we testing exactly? We can also find that sometimes our contributions don't make it all the way, like with this:

Now I’m looking at testing the browser as a contributor, so I had one of their contributors create a test task for me:

I was working on a feature related to being able to switch profiles easily; as Lauren mentions, switching builds uses your default prod profile:

She says: “Do know that if you download and run this build, it will use your normal ‘brave’ production profile, so if you don't want to do that, I suggest you rename your prod profile to something else while testing (brave-prod is a good choice that will keep it from getting overwritten).”

Pretty cool how it’s related to my PR, although I should finish that bad boy soon…

Moving on to getting their latest build: notice how I have to actually download from their release channel and test a number of things, probably functionally testing these different edge cases against all the bugs that were fixed in the latest build:

It’s cool because I’m literally downloading the browser raw, running the .dmg package associated with this build, then testing those features. As instructed, I had to run something like:

arsalan.khalid@AMAC02V20QCHTD8:~/Library/Application Support $ cp -R brave brave-prod
to make sure I keep my old profile. It'd be useful if one could switch profiles easily… although that slightly falls on me.

Getting started with testing

One of the first initial tests is to check the ‘signature’ of the build; I haven't seen something like this before:
arsalan.khalid@AMAC02V20QCHTD8:~/Library/Application Support $ spctl --assess --verbose /Applications/

Basically they’re looking for the following output:
/Applications/ accepted
source=Developer ID

Moving on to some more tests, it looks like we need to check that all of Brave's default about: pages load correctly. A neat little trick I just learned enables developers to share their full environment; pretty cool.
about:brave returns something like:

It even uses that copy button I worked on in a previous pull request :)

I made sure to go back and forth with their dev team, basically just general interaction with them, as I was confused about going back to test the previous change-set thoroughly:

Test requirement:
Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave), this is pretty huge — do you expect a developer to test all of the changes in this release, similar to this format?

Fun fact, in the time I’ve been writing this blog post — I’ve already received a reply to the likes of:

Lauren added a useful note for completing one of the tests. I didn't know developers could do such tests using online cookie testers; you learn something new every day.

Also, it looks like the latest build was 12 hours ago, which is 0.66, so I re-downloaded from the releases and picked up from that instead; now my libchromiumcontent is 66.0.3359.117, probably the version they're looking to test as they just deployed.

Fun fact: I had to test actually moving some of my own bitcoin into the browser to check that deposits still work; neat, because I hadn't actually deposited coins into the browser up until now. Guess I was forced as a developer…

It makes you ask whether specific features even exist:
“Change min visit and min time in advance setting and verify if the publisher list gets updated based on the new setting.” Where is this feature? Then I found it right in the advanced settings of payments :)

It's possible that the description isn't clear to me as a new tester, so if I find that this feature does indeed exist, then maybe it's an opportunity to fix the wording here to make it clearer to anyone else where to find it.

Finally, it looks like I've tested something which actually doesn't exist:
“Visit for a few seconds and make sure it shows up in the Payments table.” It doesn't show up in the payments table.

I think I’ve caught something weird, as I’m seeing nytimes show up now:

It's certainly longer than 10 seconds; this is weird. But you know, it's possible that this is my fault, as the browser records the time I actually spend on the tab, rather than the time it just sits there left alone. I tested this by going to an article and reading it, which was about a sad incident in Toronto, actually. It's even more sad because the perpetrator was a classmate of mine for the past 5 years; a truly disheartening and saddening incident.

In the upgrade section of the tests, I'm not too sure how to do these bits from the UI:

  • Upgrade from older version
      • Verify the wallet overlay is shown when the wallet transition happens upon upgrade
      • Verify the transition overlay is shown post-upgrade even if payment was disabled before the upgrade
      • Verify the publishers list is not lost after the upgrade when payment was disabled in the older version

I'm not sure if this is asking me to actually back up or recover my wallet; it's possible that the wording of this test isn't easy for a noob.

One of the tests asks to visit any YouTube video in a normal/session tab and ensure the video publisher is listed in the ledger table, which brings something strange to my attention. If you look at the YouTube video I'm watching, peep the time:

Now notice the time added on ledger:

Strange that they don't match up exactly. I would expect a slight difference, but this is a fairly obvious one. I think time recording on media content in Brave still has some way to go, in general. I notice the same thing for embedded YouTube videos too:

And if we look at our payments ledger we find elapsed time to be:

It's going to take some time to get this right, as there are obviously nuances here: how do you prove the amount of physical watch time for these videos, especially embedded ones? What if the user is looking somewhere else entirely? HCI in practice, people!

There's even a bit to try out the sync feature of the Brave browser, which is good for me as a new user, but it doesn't seem very intuitive to set up. Following these steps:

And now if I go to my iPhone, I don’t see any of the referred settings in this list:

Just goes to show that more development, and clearer descriptions are still to come for testers and users respectively.

Guess this means I have to skip all the sync tests… as they’ve confirmed it too :) Always good to get community feedback!

Appreciating About Pages

Another fun fact, who knew Brave had a more detailed Ad Blocker view at:

about:adblock. That's neat; I like all of the things they do with the information exposed through the about: pages.

Hotkeys Sugar

Going through the tests has also brought to my attention some cool hotkeys that I haven't used before, like:

Reopen the latest closed tab: Command + Shift + t (macOS) || Ctrl + Shift + t (Win/Linux)

Jump to the next tab: Command + Option + -> (macOS) || Ctrl + PgDn (Win/Linux), I’ll probably be using this one the most.

Jump into the URL bar: Command + l (macOS) || Ctrl + l (Win/Linux)

Pinning Tabs on Brave

Running through these functional tests has unlocked a plethora of features and capabilities I didn't know existed in Brave. Who knew you could ‘pin’ or ‘unpin’ a tab, which basically means that tab will always be available for you when opening the browser?

I think that especially as developers, we get lost in having hundreds of tabs always open; a simple feature such as this is most useful (notice the Brave logo on the very left):

In closing, this was a cool test to run through. It gave me better insight into using the browser, as well as into how different pull requests could break many things. A mature UI such as Brave's obviously needs a lot of work and testing, and this was a very good general glimpse of that. I feel grateful for the opportunity to learn, but it's important to stay reminded to deliver. I now have to be sure to complete this test in the format the dev team expects; I hope this was helpful to any readers learning how to contribute to open source projects.

Final results of test posted here:

Cheers, Arsa.

by Arsalan Khalid at April 24, 2018 05:59 PM

Qiliang Chen

OSD - Release 03

In this release, I'm going to find projects I'm interested in and fix some bugs if possible. I hope to find projects I can keep contributing to, not only for this release but also in the future.

I really like Android projects, so I wanted to find some Android projects to work on. I Googled 'Android open source project' and found some articles introducing candidates.

I tried some of the projects and reviewed some of their issues. In the end, I chose Leafpic to contribute to.

Introducing Leafpic

Leafpic is an Android open source project that provides basic features for viewing, managing, editing and sharing photos. It's a lightweight application that runs very smoothly on Android devices. It has a simple interface and I like it very much. It has around 1300 commits, 9 releases and 50 contributors. It's a good project for our practice.

Potential Bugs to Fix

1. Leafpic: Rate app option is not functional.
    Issue link:

2. Leafpic: Translate to Chinese.

Fix Bug:

1 - Leafpic: Rate app option is not functional.

Issue link:

This option is supposed to let the user go to the Play Store to rate this software. However, it does not work as promised. The following video shows the issue.

I tried this application and found that the problem does exist! So I tried to fix it.

First, I needed to find the code that handles this function. I went into the 'about' layout first, because this option is in the 'about' screen. I did find it.

The rate option has the id 'about_link_rate'. I thought that when we click the 'rate' option, the application would take some action on the element with this id, so I tried to look at every piece of code related to this id. I grepped for it in the terminal and got this:

There are two parts of the code related to this id. One is what I've just mentioned in the about layout. The other is in the '' file. The original code was:
So yes, this is the code that handles the rate function.

I did some research online and studied this code. In particular, I learned a lot from this site:
I found the most upvoted answer very useful. What it does is open the Play Store app for rating; if the phone has no Play Store app, it opens the page in a web browser instead. This is more comprehensive, because not all phones have the Play Store. I used this code to help improve the application.

But the rate option still didn't work. I thought the 'getPackageName()' function was not working, so I did some study of it. In Google's documentation, I found information on this function:
It should work. I had no idea why it didn't.

I tried to use the application name directly instead of using the function to retrieve it. I went to the Google Play Store and searched for this application through a web browser; the reason I used a web browser was that I needed to get its address. I got this:

The part following 'id=' was what I wanted to find. I substituted it for the 'getPackageName()' call. It worked!

Pull request link:

2 - Leafpic: Translate to Chinese.

I also did some translation for this application. The project's translation is handled by Crowdin. The website's link is here:

Before my translation, it was 77% translated to Chinese.

After my translation, it was 93% translated.


Through this release, I tried different Android projects. I came to understand one of them, joined it, and became a contributor. I learned to use different skills to determine the location of a bug, such as using the 'git grep' command to find relevant code. I also learned to analyze the links between different modules. It takes a lot of time to understand a project, but once you understand it, it's relatively easy and fun to contribute. I will continue to contribute to it and to explore more interesting projects. I hope to become an outstanding Android developer by contributing to open source projects.

by Chan Kignor at April 24, 2018 05:44 PM

Jeffrey Espiritu

SPO600 Project Follow Up

Inline Assembly Update After benchmarking the inline assembly changes I made on the BBetty and AArchie servers, I was rather surprised that the modified code actually degraded the encoding performance by significant margins. So I reasoned this may have to do with the slow memory access on BBetty and CCharlie in comparison to AArchie because … Continue reading SPO600 Project Follow Up

by jespiritutech at April 24, 2018 04:49 AM

Evan Davies

Release 0.3 - We All Need Some Space

For Release 0.3, I originally planned on working on another Brave desktop browser bug, but after perusing the open bug list, I wasn't sure if I could find anything that really took my interest. Thus, I began to search.

Toiling through Github's repos, looking for a project that was in javascript (a current preference for myself) I came across an interesting concept - an in-browser debugger. This project, aptly named Debugger.html, allows users to debug a web page in real time, without any other external programs. This means you can add breakpoints, run commands, etc.. in a more functional environment as opposed to the "inspect element" function of commonplace browsers. Currently, the project is focused on Firefox, but a Chrome implementation is also in progress.

The bug that I had chosen to work on regarded a visual issue, where there was no padding for one of the panels on the page. This caused the information inside to clump together, and did not follow the padding rules that the other panels followed. The actual issue page can be found here.

My initial thoughts going into this were as follows:
 1. This was likely fixed through CSS
 2. There are probably quite a few CSS files to wade through
 3. Inspect Element will be a good friend

After some searching, testing, and fruitless efforts I noticed this class:

It appeared that the list of elements was embedded in this "accordion" class, so I began to look around Accordion.css. I found a class inside that matched up with the element mentioned in the issue. As expected, adding a padding to this class worked! I uploaded the file to my forked repo and issued a pull request. I thought that this would be the end of the bug, and considered looking around for another one to work on. As I was doing so, however, I received a message from one of the developers stating that changing this class's CSS would create a multitude of issues in other areas that I wasn't aware of. He did, however, suggest a .js file that might point me in the right direction.

It was back to the drawing board, but with a hint towards what I needed to look for, I was optimistic. As it turns out, the file the developer had suggested was the wrong file, but it WAS implemented next to the actual file I needed to look at! This file had no connection to a CSS file, which would explain why there was no padding. I added an import for the CSS file in charge of the "Secondary Panes" (the right sidebar elements). In addition, I added a class in the CSS file that would catch all divs inside the specific class and add a 4px padding. A compilation later, and everything was working! I reverted my original changes on my repo and added in my new ones. As of now, I am waiting on a response from the reviewers to see if my request can be merged.
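The gist of the final change, sketched as CSS (the selector here follows the post's description; the actual class names in debugger.html may differ):

```css
/* Hypothetical sketch: pad every div inside the secondary panes,
   matching the 4px padding described above. */
.secondary-panes div {
  padding: 4px;
}
```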

Changing the project I worked on for 0.3 was a refreshing change. Although it was another browser related project, it had a nice change of pace. It allowed me to hone my investigative skills, as well as my understanding of how larger projects function. I will keep in contact with the reviewers, and perform any further changes needed to have my pull request accepted.

by Evan Davies at April 24, 2018 04:05 AM

Zhihao Cai

TDD Practice in Brave

For this lab, we’d like to practice the TDD (Test Driven Development) on Brave URL bug.  Essentially, TDD is the process of test-first development, making our code passing the test we just created.

After you start your Brave build, head to the URL bar, input “dog cat” (with a space in between), and press enter; you will notice a different result compared with the behavior in Chrome, for example.

It turns out Brave doesn’t take care of the spacing in the query string: instead of returning a search string with “dog%20cat”, we actually got two separate strings, “dog” and “cat”.

Once we have our desired result, we can now add our test case for this specific behavior in test/unit/lib/urlutilTest.js:

Note the urlUtil inside the assert statement; it gives us a hint about where the code might sit. So, heading to js/lib/urlutil.js, navigate to the isNotURL function and make the change right before the call to UrlUtil.getScheme(str):

By issuing npm run test -- --grep="urlutil", our test should pass, and the bug should now be fixed.
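A simplified sketch of the idea (not Brave's actual urlutil.js; the scheme check is reduced to a bare regex for illustration):

```javascript
// Encode interior spaces before deciding whether the string is a URL,
// so 'dog cat' inside a URL no longer splits the input into two tokens.
function isNotURL (str) {
  str = str.trim().replace(/ /g, '%20')
  // The real function goes on to call UrlUtil.getScheme(str), etc.
  return !/^[a-z]+:\/\//i.test(str)
}

console.log(isNotURL('https://example.com/dog cat'))  // false: it's a URL
```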

by choy at April 24, 2018 03:22 AM

Justin Vuu

OSD600 – Lab 6 – Fixing a Bug And Adding Tests

In this lab, we fix an issue in Brave and then build tests for our fix.

The Issue

Brave parses text entered into the URL bar to determine whether it’s a URL or a search term. However, there is a bug: if a space exists anywhere in the string other than at the beginning or end, Brave assumes it’s a search string. This means entering “ cat” will cause Brave to think we’re literally searching for “cat” and “”.


Current build of Brave


For comparison, other browsers like Chrome see that as a URL, replacing the space with “%20”.




The Fix

Fixing this was really simple: Add a line into urlutil.js that replaces all spaces with “%20”.

And now, URLs with spaces in them will be parsed as URLs instead of search strings.
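That one-line substitution, sketched in isolation (a hypothetical helper, not Brave's exact code):

```javascript
// Percent-encode interior spaces so the address parser sees one token.
const encodeSpaces = (input) => input.trim().replace(/ /g, '%20')

console.log(encodeSpaces('https://example.com/a b'))  // 'https://example.com/a%20b'
```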


My build of Brave with str.replace



After running the tests, we find that there are some tests in place that check that text with spaces in the URL bar should not be considered a URL. By editing these tests to expect the opposite result – since such text is now treated as a URL by the browser – the tests pass.

by justosd at April 24, 2018 03:04 AM

Bakytzhan Apetov

Release 0.3: perf.html tool

For my last release of the Open Source course, I decided to contribute to a project called perf.html, a part of Mozilla’s devtools.

perf.html interface

This is how the devtools team describes perf.html:

“perf.html visualizes performance data recorded from web browsers. It is a tool designed to consume performance profiles from the Gecko Profiler but can visualize data from any profiler able to output in JSON. The interface is a web application built using React and Redux and runs entirely client-side.

Mozilla develops this tool to help make Firefox silky smooth and fast for millions of its users, and to help make sites and apps faster across the web.” (Source: devtools-html)

First, I wanted to tackle the issue #948:


This issue happens because of the way the devtools team defined the render() function in CallNodeContextMenu.js.


Notice that the <ContextMenu> tag gets rendered regardless of how many nodes there are to show. The comment says that “ContextMenu expects at least 1 child.” I tried changing this function in several different ways, for example checking this.state.isShown before rendering, but I couldn’t get the desired result without breaking the code, because the menu expects a minimum of one node to render, otherwise it won’t show.

Next, I made a contribution to some of the documentation in perf.html. The issue is #937.


I changed the labels from Good First Bug to Good First Issue and fixed the links to the Issues page. I also responded to change requests from one of the devs. You can see my Pull Request here.


Overall, it was a good experience studying in the Open Source course. I want to express my thanks to our professor for introducing us to all of the fundamentals and the practices used by the open-source community. I wish there had been more opportunities to work on bugs like I did in my Release 0.2 for debugger.html.

This course has also built a stronger foundation of JavaScript knowledge for me. For example, in my Release 0.1 I learned more about Node.js, Express, routing, and testing while building an API using Google’s libphonenumber. Many of our labs also used an extensive amount of JavaScript and related frameworks.

I learned a lot about the GitHub & Git workflow. I especially memorized the “fork, clone, build, fix, add, commit, push” procedure, and I realize its importance for my future work in software development. This is it for my Release 0.3.

Thank you!

by Jean A. at April 24, 2018 02:36 AM

Zhihao Cai

Learn from the Code Infrastructure

For the last release, I was looking into the Mozilla GitHub repos, hoping to find some bugs to fix. Since most projects in Mozilla are split into small components, there are relatively many more miscellaneous bugs compared to the centralized VSCode and Brave project repos.

As for my growth goals, I want to take this chance to learn from the infrastructure and development cycle of the open source project.

The first bug I worked on was adding CSS lint support to the Blurts Server‘s infrastructure. Blurts Server is a Node.js prototyping project for Mozilla's Breach Alert feature. The fix was straightforward, since the issue page already gives the solution; all I needed was to read through the stylelint repo, understand the basic usage, and apply it to the project.

Basically, package.json is the core of working with Node.js, since we use it not only to include dependencies but also to define behaviors. Not surprisingly, many JS libraries are able to work together, which makes infrastructure changes a lot easier, without extra modification.

For example, we just need to insert line #7 in the “scripts” section so that the “npm run lint” command triggers our stylelint check, and we can customize our CSS checking rules by simply adding a “.stylelintrc” file.
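As a rough illustration of that wiring (the glob and version below are guesses, not the exact Blurts Server diff):

```json
{
  "scripts": {
    "lint": "eslint . && stylelint \"public/css/**/*.css\""
  },
  "devDependencies": {
    "stylelint": "^9.2.0"
  }
}
```

A minimal “.stylelintrc” can then be as small as { "extends": "stylelint-config-standard" } to pull in a shared rule set.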

The second one was removing unused references from Kitsune, which is a Django application. I hadn't done any Python before, but I didn't find it hard to perform a clean-up.

By running the “git grep” command with -il options, I can easily identify where the references are used.

$ git grep -il 'treejack'

What really interested me is “product.html”: unlike normal HTML, it uses {%...%} as open/close tags. After some research, I figured out that the “jinja” in the file path refers to a template engine for Python, and that the {%...%} blocks represent control structures, for instance for loops and if…else statements.

The last issue I tackled is similar to the second one – a bit of clean-up, this time about logging in JavaScript.

What I have learned:

As part of development, a project may need to utilize a varied set of tools and libraries for testing or for proofs-of-concept, but as the project progresses, constant monitoring and code maintenance are inevitable.

Learning how to set up and organize the infrastructure is as important as implementing new features, since it makes it much easier to maintain code health and add further support.


by choy at April 24, 2018 02:13 AM

Aliaksandr Ushakou

Release 0.3

The goal of this release is to contribute to a real open source project.

The issue that I decided to tackle this time is “Trying to save page offline always shows Downloading…”. Actually, I had been working on this issue since Release 0.2, but until now nothing seemed to work. (By the way, the project is the Brave browser.)

So, the issue says that if we load a web page while online and then try to save it while offline, the downloading process runs forever and no error is ever shown.

Let’s try to reproduce it!


And yes, the issue is reproduced! It means that we can try to fix it.

First, we need to find the code block related to the downloading process. Usually, if I have no idea where to start searching, I just use the search bar; for example, we can try searching for key words like “download”, “downloading”, “save file”, etc. Let’s say we’ve found a code block that might be what we need, but how do we make sure? I think the best way is to set a breakpoint and try to download a web page. If the breakpoint is hit, we found something related to the downloading process. Of course, it doesn’t always mean that the found code block is what we eventually need; however, it always means that we are somewhere close to it.

The file that I found interesting is filtering.js. I found out that there is an ‘updated’ event that is triggered every time I try to download a web page.


So I decided to work on this file.

The obvious idea that came to my mind was to check the network connection when the downloading process starts. If there is no network connection, the downloading process should be cancelled or interrupted. So I started to work in this direction.

I found out how to check the network connection and added two events that do it.


After that, I changed the ‘updated’ event to include logic that interrupts the downloading process if there is no internet connection.


Ok, time to check if it works.


And it works!

That’s it for today.

Pull request can be found here.

Thanks for reading and take care everyone!

by aushakou at April 24, 2018 01:52 AM

Hao Chen

Tricky Javascript with a sprinkle of React


This week’s blog will summarize my 1+ month journey. I will be tackling this issue within Debugger.html.

The preview gets stuck when the cursor moves quickly over a variable in debug mode. The issue is quite hard to reproduce, as it doesn’t occur every time the cursor slides over.

I was provided with a possible lead on where things might have gone haywire. Exploring the stack led me to onMouseOver, which detects whether the cursor is over a variable. I noticed that this mouse event is attached to something called codeMirrorWrapper, a giant invisible mask that covers the entire debugger editor. Also, this function is a debounced function; check out this link to learn more about debounce. So my initial thought was that the call to updatePreview() was somehow late to the party, using an old event target due to the debounce. But removing or increasing the timer did not make a difference, so I moved on from this.
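For context, a minimal debounce looks like this (a generic sketch, not the debugger's actual implementation):

```javascript
// Returns a wrapper that fires fn only after `wait` ms of inactivity,
// so a burst of mousemove events collapses into one trailing call.
function debounce (fn, wait) {
  let timer = null
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), wait)
  }
}
```

The subtlety the post runs into is that by the time the trailing call finally fires, the event object it captured may already be stale.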

Since I was struggling to find a thread to hang on to, I thought to myself: maybe this bug was introduced in a past commit. All I would have to do then is decipher one specific commit.

time to go back in time…commits, close enough.

The term is git bisect. My professor’s blog post is very helpful in getting me started. What I discovered shocked me! The issue existed from the very moment the functionality to preview variables was introduced. So this issue wasn’t a regression of any sort. That much was confirmed.

Moving on…

I began to pair with a few mentors within the debugger community. The following summarizes some of the things we’ve tried or considered:

  • Adding hover events to each individual variable during debug mode? Wayyy too costly in terms of performance (if the codebase is large).
  • Ignore default behavior with Event.preventDefault().
  • Adding/removing async/await to the updatePreview() call.
  • Adding additional mouse states such as onMouseEnter and onMouseLeave to code mirror. Doesn’t pick up individual variables, the call only triggers when entering/exiting the editor mask.

At this point, I’m stumped along with the devs I’ve been pairing with. Time to get my hands really dirty. I began to spam console logs within the mouse events. I noticed something fishy… take a look at the below description of what I observed.

I’m not so sure why another set of onMouseOut and onMouseOver is called, so I added an onMouseOut event that contains the same logic as onMouseOver. Also, I removed a flag in updatePreview() to produce the following.

I got rid of the Preview…but this solution is far from being correct. The yellow highlight still remains.

I made a pull request just to showcase some progress, but I’m hoping to find the root cause in the near future.

I noticed that a class is added to the variable for the CSS to highlight it. Further digging around led me to a componentDidMount() call that marks a specific range of characters with this class. From here, I took a moment to explore a quick overview of React and the lifecycle of a component. Once again, I spammed the lifecycle calls with console logs. The popup was being rendered right after being unmounted, which led to componentWillUnmount() not being called to clear the marker (for the highlight).
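The contract being violated can be reduced to a few lines (plain JavaScript standing in for the React component; all names here are illustrative, not debugger.html's actual code):

```javascript
// The marker created on mount must be cleared on unmount; if unmount
// is skipped (as the post observes), the highlight class leaks.
class PreviewPopup {
  constructor (editor) { this.editor = editor; this.marker = null }
  componentDidMount () { this.marker = this.editor.markText('preview-highlight') }
  componentWillUnmount () {
    if (this.marker) {
      this.editor.clearMark(this.marker)
      this.marker = null
    }
  }
}
```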

This is how far I’ve gotten. I look forward to continuing to tackle this issue in the near future.

TLDR: I’m still dealing with a tricky JavaScript issue that is hard to reproduce and hard to pinpoint the cause. Got the awesome opportunity to pair program with 3 other developers around the globe. Pushed myself to persevere and break problems down + learned a lot about JavaScript, React and Redux!

Tricky Javascript with a sprinkle of React was originally published in Haorc on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Hao Chen at April 24, 2018 01:27 AM

Sean Prashad

A Challenge to Myself

The view from 8 months ago

25 lessons that OSS has taught me over the past 8 months

Here I am at the end of an 8 month long journey; a road trip that originally was set to be 4 months but ended up being one hell of a ride that I couldn’t resist a second serving. 20 blog posts later, I realized that I’ve learned a lot and I want to share it with you.

Without further adieu, here are 25 lessons that I’ve learned from my journey in Open Source:

  1. Open source has no rubric — Landing a merged PR doesn’t get you an “A+” or “B+”. Rather the process is more valuable and as such, I derived way more enjoyment from putting the puzzle pieces together rather than the final image I saw.
  2. A “bug” is more than what you think it is — Most people think bugs are undesired features, but the term encompasses things like spelling errors and unclear documentation!
  3. Not knowing a tool/technology shouldn’t scare you — Many students think that if they don’t know a language such as “JavaScript” or “Python”, they can’t contribute to those kinds of projects. You’ll need to use another excuse! Getting involved with documentation was how I first contributed to Rust when I hadn’t written a single line of Rust yet.
  4. The meaning of “community” — Community is something that I was fortunate to have experienced. Mozilla’s AMO project has been nothing short of amazing. The devs have invested copious amounts of time into guiding me through a completely new codebase to help me succeed where I once failed.
  5. Documentation is what turns good projects into great ones — I’ve come to appreciate clear and concise documentation in projects. Even more so, clear examples are a godsend!
  6. Explaining technical work without technical jargon is challenging — I’ve gained a much greater appreciation for conference speakers who speak to a general audience. Our bi-weekly demos since January have opened my eyes to the skill that it takes!
  7. Seek first and seek well — When friends ask “How do I do x?”, I think to myself “Why are you asking me? That information is available somewhere!”. Now this isn’t to say that I think my friends are helpless, but rather that I’ve developed a mindset to “seek first and seek well” before asking questions.
  8. Open Source allows me to give back — Through finding my own success, I’ve been helping others find success too! Hao, Jafar and Chaya needed some help when things got bumpy, but once they got going.. well you can see for yourself in each issue 😁
  9. Every bug is a story waiting to be told — Every bug has its own unique story behind it — and it’s up to you to help write the epilogue! What’s even more awesome is that I’ve been able to share my stories during interviews to score both technical and behavioural points 😎
  10. Hands-on experience that employers want to see — The technical know-how gained through solving bugs, whether it be from documentation, front-end, back-end, tooling and so much more, is something that employers love to see!
  11. One of a kind learning experience — The courses taught by Dave are stimulating, challenging, rewarding and have truly been one of a few highlights of my 8 year Senecan career. I always looked forward to new material that was relevant and up-to-date with what was happening in the tech landscape.
  12. Blogging — Blogging was something new and unfamiliar to me. I have to admit, it’s a lot harder than I thought trying to translate information into words. The great thing is that I’ve left my mark on the web for everyone to read!
  13. Live-streaming is fun!— I’ve found that I prefer live-streaming my work on bugs rather than blogging about it. The biggest downside? Nobody else wants to watch me spend hours on end to fix a bug… 😶
  14. Open Source is a lifestyle — I can easily see myself working full-time in a community as welcoming as AMO to help give back to those who were in the same place that I was 8 months ago. Open Source is the lifestyle of working in the open and embracing the community.
  15. Networking — Networking is something that not everyone is great at but I challenged myself to attend at least one event dealing with Open Source this semester — the end result? A handful of us visited back in March and Mozilla’s Toronto office in mid-April! Take the opportunity to ask and you never know what might happen!
  16. Sharing the experience — Engaging with the community via Twitter is something that I wished I did from day one. “Facebook is the people you went to high school with. Twitter is the people you wish you went to high school with.” — David Humphrey, 2018.
  17. Standing out — Like I mentioned back in #9, my work in the Open Source realm has been a focal point of conversation during interviews for co-op. More so, it has even helped me to stand out amongst UofT/Waterloo candidates 💪🏽
  18. Never stop learning — Every week there was something new to learn in class — from linting to licenses to Git and so much more — Check it out here!
  19. Open Source brings like-minded individuals together — Surprisingly, our class was only about a dozen students but everyone was here because they wanted to be, not because they had to.
  20. My work is available for anyone and everyone — Through my 8 months of Open Source, I’ve cultivated a portfolio of work that includes over 20 blog posts, 18 landed patches with 5 WIP! See for yourself by searching me on Google!
  21. Starting can be hard but it’s very rewarding — It’s very intimidating to start but once you land your first bug, the feeling is like no other. The key? Be humble and understand that your first bug might be very small but know that you’ll continue down the road to more complex ones in time.
  22. You never know who’s watching — Because you work in the open, you never know who’s watching! I received a surprising message during one of my bugs for AMO in which my work was vouched for! Check out my tweet here.
  23. Create a Twitter and follow topics in the OSS world that interest you — I’ve learned a lot just from following individuals on Twitter! Check out @bork and @MargoChepiga for starters!
  24. You can work on any part of a project if you want to —For some projects, you can immerse yourself in anything from UI to documentation to linting to tests! Go wild!
  25. Code literacy — Being able to read someone else’s code, whether it was written 10 years or 10 days ago, is a crucial skill as we’ll have to work with others in the future! I’ve been practicing this for months and have a good idea on where to start searching for features in code using things like git grep!

And so… 25+ demos given, 21 blog posts authored, 18+ bugs fixed, 2 field trips attended and 1 set of stickers later, here I am 😁

Phew.. that was a lot to say. So with all said and done, this post will serve as a memoir to my future self to never give up when things get tough, to always keep learning and to continue giving back to the next generation. More importantly, let this post serve as a challenge for me to come out with another 25 lessons learned in the next year.

Like they say — once something’s on the internet, it’s there forever… Now I’m wondering if I bit my tongue too soon..

Ah well, onto the next chapter! 😼


A Challenge to Myself was originally published in Open Source @ Seneca on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Sean Prashad at April 24, 2018 01:04 AM

April 23, 2018

Margaryta Chepiga


Have you ever had a situation when you finally fixed a bug after putting an enormous amount of time & effort? Rhetorical question, isn’t it? I am more than sure that the answer is yes. Do you remember how you felt? Do you remember that exact moment when you realized that you found the solution and it works?

When I found the solution to this bug (blog post about the issue is here), I felt (and reacted) approximately like this:


In almost every story, there is a but. I found a solution. I checked it. I double-checked it. And then I checked it again. I couldn’t believe that I did it. I sent a pull request, added screen captures, and then I got the best feeling ever. The feeling which is the reason I wake up every morning and the reason I don’t sleep at night. I felt accomplished. I felt like I did something today. I felt complete. With that, I finally went to sleep.

It was a terrible night. I was sleeping and not sleeping at the same time. All night long, in my dreams, my solution was not working, for various reasons. All night I had a feeling that I had done something wrong, made a mistake. That in reality my solution was either wrong, or I had mistyped something and the solution was not even a solution. Then I thought that I hadn’t fixed it at all, that it was only a dream, and that I would wake up in the morning and the solution would not be there, nor would the PR.

I woke up before my alarm rang and went straight to my laptop. I had to know that it was there. I had to know that I fixed it. It was. I felt relieved. But not for long. At the back of my head, I had this annoying feeling that:

  • It is not a perfect solution
  • It is wrong, you just don’t know about it yet
  • There must be an edge case that I haven’t covered yet

The weird thing is that my feeling was right. In a couple of days my PR was reviewed, and not only was my solution not the best, it turned out later that it was causing a bug.

So the original code was:

My first fix was the following:

Here I basically checked whether the URL is a new-tab page URL; if not, we don’t reset the state.

Even though it looked like it worked ( as in icon was not disappearing anymore ), it didn’t.

According to various console.log statements, the

getBaseUrl === getTargetAboutUrl('about:newtab')

would always return true.

After a couple of hours of debugging and a couple of rounds of “try and fail”, I found out that if I put the result of the statement in a variable and used the variable instead, the result would not always be true. Fix number two:

Which means that it works as expected. However, this solution was causing problems too.

I was devastated. I had spent so much time and effort. I thought I had found the solution, twice, but still, it wasn’t it. Give up? Move on to another issue and just forget about it? I couldn’t. After weeks of debugging, understanding the code, and involving other people, I just couldn’t drop it. There are certain situations and issues where it is a smart decision to move on; this one felt like it wasn’t. It was certainly hard to keep going, hard not to give up. But can I grow and learn without overcoming obstacles? Should I just take the easy way and do things that are familiar and easy for me? What would that decision give me? How would I benefit from it in the future? Apparently, I am just not the type of person who gives up and looks for an easy way. I knew that before, otherwise I wouldn’t be where I am right now, but I haven’t always thought of that as a good thing.

I kept looking for a proper solution and I think I found it.

Originally, windowStore.js had the same code but without the extra if statement that you can see above. So basically, I checked whether an app download action was performed; if not, we reset the state. Result? It worked.

Looks extremely easy. Works perfectly. Was it though?

To be honest, I am still not 100% sure that this solution is the best one. My PR has not been re-reviewed yet. Therefore, to sleep better at night, I checked all the cases I could find and made sure everything works as expected.

Be Brave. Don’t give up. You are not a failure.

Failure? was originally published in Open Source Adventure on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Margaryta Chepiga at April 23, 2018 10:42 PM

Hongcheng Zhang


In Release 3, we were asked to continue working on real open source projects, but do more: we need to fix or contribute to more issues than in Release 2 to show a degree of growth. So I decided to fix more than two issues, in different areas from Release 2. I found and contributed to three issues.

What I have done

The first one is Mozilla Science Lab. It is a community of researchers, developers, and librarians making research open and accessible.

The second one is Mozilla Office, a public Corsica instance for the Mozilla office and home offices. If you are at a Mozilla office, this project is what powers the content on the flat-screen TVs throughout the office.

I found two issues about HTTPS. HTTP is a protocol that allows communication between different systems; it is used for transferring data from a web server to a browser to view web pages, but the issue is that the data is not encrypted. Therefore, the use of HTTPS, where the ‘S’ means secure, is strongly suggested. HTTPS involves the use of an SSL certificate, which creates a secure, encrypted connection between the web server and the web browser.

There are lots of insecure URLs in the above two projects, and the maintainers want to convert the URLs that already support HTTPS to HTTPS.

When I tried to convert HTTP to HTTPS in the first project, I found more than 100 HTTP URLs, so I used search-and-replace in VS Code to convert them all at once. I had not realized that some of those URLs still did not support HTTPS, so of course some things broke, including links and images. Therefore, I had to go back to the original and double-check the conversions one by one to make sure each one was good.
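A safer version of that bulk edit is to rewrite only hosts known to serve HTTPS. A sketch (the allowlist and function name here are invented for illustration):

```javascript
// Upgrade http:// to https:// only for hosts we've verified support it;
// everything else is left alone so links and images keep working.
const HTTPS_OK = new Set(['mozilla.org', 'github.com'])

function upgradeUrl (url) {
  const m = url.match(/^http:\/\/([^/]+)/)
  if (!m) return url
  const host = m[1].replace(/^www\./, '')
  return HTTPS_OK.has(host) ? url.replace(/^http:/, 'https:') : url
}

console.log(upgradeUrl('http://mozilla.org/en-US/'))  // 'https://mozilla.org/en-US/'
```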

Here is the first PR and second PR.

The last project that I contributed to is Kitsune, which is the platform that powers SuMo. There is a URL pointing to a Troubleshooter add-on which currently returns 404; it is not available. Therefore, I removed all references to the Troubleshooter add-on. Here is my PR.


This is my last semester. To be honest, this course is the most useful one I have taken. I attend college to find a job; I have learned some languages, including C, C++, and Java, but all of it was basic, and I have to improve those skills to satisfy job requirements. OSD600 is really job-related: in Release 1, I created an open source RESTful API and became familiar with GitHub, and in Releases 2 and 3, I contributed to real open source projects. These are the real experiences that jobs require. Nice course!



by hongcheng1993 at April 23, 2018 10:37 PM

Woodson Delhia

Open Source Release 0.3: Miso-Haskell Function Helper

This blog post is about my last open source contribution for my open source course.

About The Project

The project that I decided to contribute to is called Miso. Miso is an open source front-end framework, written in Haskell, for building interactive single-page web applications. Miso is heavily inspired by The Elm Architecture (TEA) and uses GHCJS to perform JavaScript FFI. One thing to note: Elm is also built in Haskell; however, many of Haskell’s great features have been cut down to make the framework easy for beginners to handle. Here is the GitHub repo of Miso, where you can find more information.

Why Miso?

I had been looking for the past couple of weeks for a small front-end framework written in Haskell that I could use to create a small drag-and-drop file upload widget. The intention is to connect my sforce-migration tool with the widget. My small library will parse YAML <-> XML Salesforce projects. Due to the lack of documentation about Salesforce front-end frameworks, I initially intended to write the widget in PureScript. However, that would have ended with me rewriting my library in PureScript and losing some of the strong libraries within the Haskell ecosystem. After countless searches, I finally stumbled on Miso's webpage and decided to give it a try.

About The Helper Function

Setting up Miso was quite simple (I was actually expecting a painful process). After setting up Miso and starting the provided starter project, I decided to tackle the implementation of the drag-and-drop widget. Miso is very intuitive: the framework lets us create HTML elements with nice helper functions such as div_ and h1_, covering pretty much all the standard HTML tag names with an underscore appended. Below is the type signature for the div_ function, but most elements have the same one. Each element accepts a list of attributes that may trigger an action, along with the child views that make up the content within the element, and after receiving the two arguments it returns a view.

Constructing an element with an attribute requires importing the Map module and using the singleton function, which creates a Map with a single key (the attribute name) mapped to its value.

However, combining attributes can get really tedious, so I decided to create a helper function using the Monoid and Map modules. The (<>) function from the Monoid module is equivalent to mappend. So I thought about creating a similar helper to make combining attributes easier and more readable: a smart attribute constructor, (=:). It abstracts away M.singleton and lets us use the function infix. Which means instead of

functionName arg1 arg2

we can use the function like so

arg1 `functionName` arg2

and we can now use the (<>) to combine them like so

(arg1 =: arg2 <> arg1′ =: arg2′)

Below is a code example.

And that is pretty much it! Here is the link to my PR. You can also see my code in the Miso.Util module.
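The code screenshots aren't reproduced here, but the core idea of (=:), a singleton-map constructor whose results are combined with (<>), can be sketched in JavaScript for illustration (all names below are mine, not Miso's):

```javascript
// Illustration only: the singleton-constructor-plus-merge idea behind (=:),
// transplanted to JavaScript. Miso's real helper builds a Haskell Map.
const attr = (name, value) => ({ [name]: value })  // like M.singleton

// Merging objects with spread plays the role of (<>) / mappend:
const style = { ...attr('color', 'red'), ...attr('font-size', '12px') }

console.log(style)  // { color: 'red', 'font-size': '12px' }
```

The merge keeps each attribute declaration short and readable, which is the same motivation the post gives for (=:).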


by Woodson Delhia at April 23, 2018 08:47 PM

Bakytzhan Apetov

Lab 6: Fixing White Space Search

For this lab, we’ve tried to work with Brave Browser and how it handles white spaces in the URL bar.

We noticed that while Google Chrome and other browsers show the white space after searching a string like " cat", Brave Browser shows it with a %20 in place of the white space. We needed to fix this bug.

I followed the standard bug-fixing routine: forking the Brave Browser repo, cloning it, and changing the file contents.

What I changed is:

  1. I added the function to replace white space in js/lib/urlutil.js.


2. Then I added a test case for this bug in test/unit/lib/urlutilTest.js (I moved the second string onto the next line because it wouldn't fit in the screenshot otherwise):


So after that it was working:
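The patch itself is only shown in screenshots, but the shape of such a helper might look like the following (the function name here is hypothetical, not necessarily the one used in js/lib/urlutil.js):

```javascript
// Hypothetical sketch of the kind of helper described above: turn encoded
// spaces back into real spaces for display in the URL bar. The actual
// function name and location in the fix may differ.
const replaceEncodedSpaces = (str) => str.replace(/%20/g, ' ')

console.log(replaceEncodedSpaces('https://www.google.ca/search?q=%20cat'))
// https://www.google.ca/search?q= cat
```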


by Jean A. at April 23, 2018 08:24 PM

Justin Vuu

OSD600 – Lab 2 – VSCode

For this lab, we installed Visual Studio Code as well as built our own version of it.

VSCode has proven to be a very useful… lightweight?… tool in coding throughout this course. Being able to code, build, debug, and test code in VSCode has made developing code much easier!

I didn’t install any extensions. I found that what was available by default gives me everything I needed for this course up to now. Perhaps I should explore what extensions are available.

Building my own version of VSCode

I did have some difficulties at first with trying to build VSCode on my machine. Mainly it had to do with the prerequisites. After scratching my head for a second, I decided to just uninstall the prerequisites and try again from the top. I’m not sure which step I missed or did wrong, but the build completed successfully the second time through!

Live Debugging

Arguably the best part of VSCode. It took me a while to get the hang of it at first because this was all new to me. Even in INT422, in which we used Visual Studio, I never used the live debugging feature.

Now, I used the live debugging feature when working on releases 0.2 and 0.3, as well as lab 6. Being able to see what was going on with the code while being able to make changes to it live was like magic. No joke. I can’t go back to the old ways of Notepad++ and Vim, saving, building, testing, and then manually figuring out what happened.


This is also when I was formally introduced to Electron. I have used another program built with Electron – Discord – but I never knew what it was back then.

So what is Electron?

It’s an open source framework for creating desktop apps like it’s a web app. Essentially using HTML, JavaScript, and CSS to make desktop programs.

by justosd at April 23, 2018 08:13 PM

OSD600 – Release 0.3

For this release, we were tasked again to contribute to an open source project, with the idea of doing something “more” than in our previous release. “More” in this case means doing something different or more challenging so we can grow as contributors.

Returning to Brave


I decided to focus on Brave again for this release because I was already familiar with the project from before. Fixing the issue I chose for Release 0.2 has taught me a fair amount about Brave’s inner workings.

Growth Goals

In order for us to grow, we had to aim higher. We were given some suggestions of goals to help us, and these were the ones that I chose:

  • get more involved in the community
  • to work on more bugs than last time
  • to gain more experience in different areas of contribution

Originally, I chose to work on more bugs than last time. However, due to the time it took to discuss the first bug I took on, I figured that working on multiple code-related issues would not be feasible. In order to achieve my first goal, I also had to look into another area of contribution: updating their documentation.

Achieving My Goals

Joining The Community

For Release 0.2, all I did was comment on a triaged bug saying that I wanted to work on it, and then I created a pull request. I never got involved with the community at all.

This time I joined their Discord and took part in discussions. I also chose to work on a more recent issue that was getting some attention. I brought up the possibility of localization issues that the fix would introduce, as well as my approach to resolving the issue.

Working On More Bugs

Working on more bugs seemed like it would be simple at first. However, as I mentioned in the previous section, it did come to a point where it didn’t seem like it would be possible. Getting feedback was pretty quick at first, but as the week drew to a close, responses were taking longer and eventually I got no responses at all. Brave is currently undergoing a big upgrade so it’s likely all team members were focused on that.

In order to achieve this goal, I had to find issues myself. I assumed that finding code-related issues would be very difficult, so I found issues in their documentation instead. This would be more beneficial to me as I hadn’t contributed to documentation in the past, and I can make that a growth goal!

With those two issues, I've basically achieved this goal. I know it's only one more than my previous release; I originally aimed for three, but I had to scale down due to time.

Gaining Experience In Other Areas Of Contribution

To achieve this goal, I went through Brave’s documentation. I originally expected that I’d only be fixing the odd typo or grammar error. Luckily, it didn’t take long to find a document that was outdated and had a glaring mistake.

My Contributions

Improving About:Passwords

This issue was filed by a collaborator. In Brave, about:passwords is a page that lets users manage the passwords the user allowed the browser to store. At the top of the page, it instructs the user where to go if they want to change how their passwords are stored.

Context menu on Mac

Currently, the page suggests users go to Preferences > Security. In some ways, there’s nothing wrong with this because, on MacOS, Windows, and Unix, the name of the menu to access the Security section is called “Preferences”. Additionally, the URL to get to Preferences is “about:preferences”.

The issue occurs when users try to access Preferences through the context menu. On MacOS, the option in the context menu to get there is aptly called "Preferences". However, on Windows or Unix, the same option is called "Settings". Now the instruction may not make sense to some users on either of those two operating systems. Savvy users may figure out that it means "Settings" because it leads to about:preferences. Other users might go looking for a "Preferences" option.


Context menu on Windows

There are two ways to fix this: either remove the check in the context menu that detects the OS and changes "Preferences" to "Settings", or add a check to about:passwords that changes the instructions. I assumed that there was a reason for the different name and that the check was added later in development. With that, I approached the issue with the second option.

Working On The Solution

There are three files responsible for the passwords page:

  • about-passwords.html – the page that is loaded but we can ignore this file
  • passwords.js – renders the content. It’s referenced by the HTML file, and uses strings from…
  • passwords-properties – the localization file

Currently, in passwords-properties, the string for the instructions is stored in one variable.

This needed to be split into three: one that holds the part of the instruction common to all three OSes, one that holds the part specific to MacOS, and one that holds the part specific to Windows and Unix.

In passwords.js, I needed to modify this block of code that changes which instruction is displayed depending on the OS.

First I needed to import “isDarwin”, which is a function built to check if the OS is a Mac.

I changed the above block of code so that the text is in two <span> tags inside the <div> at line 232. The first span would have the ID matching the common instruction, and the second span would use an inline condition statement to change its ID depending on the OS.

The user who reported the issue also suggested making the instruction a link that takes the user to the Security page, hence why the second span has an onClick property.
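As a rough sketch of the OS-dependent part (all identifiers below are my own guesses for illustration, not Brave's actual names), the inline condition amounts to picking a localization ID based on the isDarwin check:

```javascript
// Sketch of picking a localization ID per OS, mirroring the isDarwin()
// check described above. All names here are assumptions for illustration.
const isDarwin = () => process.platform === 'darwin'

const commonL10nId = 'passwordsInstructionCommon'
const osL10nId = isDarwin()
  ? 'passwordsInstructionMac'   // menu is called "Preferences" on MacOS
  : 'passwordsInstructionWin'   // menu is called "Settings" elsewhere

console.log(commonL10nId, osL10nId)
```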

I added a bit of styling to make the link apparent. For the most part, it seems to work, but when I asked a friend to test my branch on their Mac, the link wasn’t orange.

How it appears on Windows
How it appears on Mac

For the sake of the assignment, and with the approval of the collaborator, I created a pull request labeled “work-in-progress”. Though the semester is over, I really do want to see this issue through to the end.


The document is extremely outdated. It explains how a component is created, what it extends, the hierarchy of the components, and includes a glossary explaining each component's function.

Most of the information in that document reflects what Brave was like 3 years ago! It has changed drastically in that time.

On the image to the left, you can see that there are only a small handful of components: a cross-section of what Brave was like in its early life, back when every component was stored in the js directory.

Today, Brave has well over 100 components. Some components have been restructured and renamed as well.

Three months ago, a contributor updated the hierarchy to what you see on the left. However, the contributor erroneously thought it described the directory structure of Brave's components. It's actually a structure of how each component references another. So now, the hierarchy is a strange mix of an outdated component tree and the current directory tree.

I filed this issue myself, made corrections, and submitted a pull request.

Changes That Needed To Be Made

For starters, the very first line in the document states that all components extend ImmutableComponent, which in turn extends React.Component.

This is no longer true. A quick look at many components shows that they extend React.Component directly:

So I changed this like so:

The hierarchy needed a serious update. Some components have been renamed; "App", for example, is now "Window". I undid the changes made by the previous contributor, which had replaced "Main" (a component still in the program) with "Renderer" (a directory). Then I added every new component Brave uses. This added over 100 entries, bringing the component hierarchy to 180 items.

To give you an idea of how much has changed, see above how Main (or Renderer) directly uses 4 components. This is how many components Main uses now:

I added the new components to the glossary and explained them to the best of my ability.

by justosd at April 23, 2018 07:22 PM

Yalong Li

OSD Release 0.3 final post

While trying to fix this issue in debugger.html, I found some other bugs. Both are related to the "Set directory root" menu button on the left side panel of the debugger, and neither is in the issues tab.

The first issue occurs when there is a webpack folder. When trying to set a subdirectory as the root, the content of the folder goes missing, but when setting it directly, the content is rendered. It happens because webpack folders have a different URL format than ordinary ones.

Issue 1 - STR:
  1. Go to
  2. On the left panel, right-click the "Webpack" folder and click "Set directory root".
  3. Then right-click the "app" folder and click "Set directory root" (notice the content is missing).

Issue 2 - STR:
  1. Go to
  2. On the left panel, right-click "" and click "Set directory root".
  3. Then expand the subfolders, right-click "libs", and click "Set directory root" (notice the content is missing).

Screenshots of issue 1 and issue 2:
I fixed both issues and added test coverage to the code. It was a learning process debugging these issues. The pull request can be found here.


Another member of devtools/debugger asked me to fix the issue, but I could not find where its source code was. So, I went to David's office to ask for help. He was knowledgeable and experienced at tracking down bugs. We spent about 15 minutes and found roughly where the bug was. Compared to me spending hours doing it alone, David saved me a bunch of time on debugging. Big thanks to him.

So, I wrapped up the code and updated my pull request on GitHub.

by Yalong at April 23, 2018 07:17 PM

Joseph Pham

OSD600 – Final Release

For the final release, I decided to stick with Firefox Screenshots. I was still having issues with debugging the extension, so this time I took a different approach: I looked through the solved/closed issues to see if there was any mention of debugging or anything that could possibly help me. I stumbled upon a closed issue that used Firefox Nightly to recreate the issue. Maybe if I tried recreating bugs from solved issues, it would help me with my debugging problem. I felt like I was getting closer to being able to debug the extension, but again, no luck. With all of these problems and no help from their documentation, I decided I should document how to install the extension on Linux.

I remember when I first started on this project, it took me about 4 hours to install PostgreSQL and get the server up and running. Now that I am familiar with PostgreSQL, it took me about 10 minutes to uninstall, purge, and reinstall it. The installation isn't too difficult if you know what you are doing. I uninstalled and completely purged PostgreSQL from my laptop, making sure there was no trace of it left anywhere on my system: I had to kill the open ports, stop the services, and then uninstall the program and all of its dependent packages. I reinstalled the database and ran the program. It worked! Then I had to uninstall and purge again and start documenting. I think I did this about 5 times before being confident enough to submit a pull request. This was their response:


What was frustrating about all of this is that their README only says "Install PostgreSQL". There are a bunch of additional steps you need to take before the server is up and running and the extension runs on localhost. If there had been instructions initially, it would have saved me a lot of time (and tears). I had hoped that with my contribution, I could help somebody else get through the installation with no trouble. I understand where they are coming from, but they should have at least linked PostgreSQL's instructions somewhere on their page.

For my second bug fix, I found a CSS issue that causes buttons to remain active, even after being clicked.

I found this bug quite simple to solve. Now that I am familiar with the code, I located the CSS file containing the button's styling. The issue was that the styling for hover and focus was the same. I separated the two attributes and removed the background-color for focus. I kept the border, however, so that you can still tell whether the button is in focus. I submitted the pull request with no issues this time, and hopefully they accept my change.

During the last 4 months, I gained first-hand experience of the open source world. I was hesitant in the beginning, thinking that this would be too difficult for me. At some points that was true, but I still tried my best on these assignments. In this last release, I felt I learned how challenging yet rewarding open source projects can be. This was definitely a learning experience that I can carry forward throughout my career.


by jpham14 at April 23, 2018 07:03 PM

Aliaksandr Ushakou

A first glance at Open Standards

Software testing is very important for any project. It is important because people rely on stable and error-free products. Testing Open Standards like ECMAScript is even more important because every project that uses ECMAScript depends on it.

By the way, what is ECMAScript? ECMAScript is a scripting-language specification standardized by Ecma International in ECMA-262. It was created to standardize JavaScript, so as to foster multiple independent implementations. JavaScript has remained the best-known implementation of ECMAScript since the standard was first published, with other well-known implementations including JScript and ActionScript (Wikipedia).

JavaScript has been one of the most popular programming languages lately. Many have heard of JavaScript, but not everyone knows that "JavaScript" is a trademark owned by Oracle. Using trademarks can lead to all sorts of problems; therefore, lots of developers use the name "ECMAScript" instead of "JavaScript".

Ok, let’s take a look at the tests themselves. Here we can find the steps for running them.

Usually I use Windows PowerShell, and in most cases everything works fine. But this time something went wrong. When I ran the command  test262-harness test/**/*.js , the tests started to run and everything seemed fine. They kept running and running, and after waiting an hour my patience ran out and I pressed "Ctrl + c" to stop them.

It was clear that something was wrong, but I didn't know what, so I decided to wait until the testing was over. It took more than 12 hours and ran 58797 tests.


Knowing that Windows sometimes has unexplainable issues, I tried to use Git Bash and it worked!


58797 tests on PowerShell vs 205 on Git Bash

After that, I had a look at the Array.prototype.reverse() tests. I chose the first test, studied it, and then rewrote it using the  assert()  function. The result can be found here.

by aushakou at April 23, 2018 05:22 AM

Abdul Kabia

On the fileside of things

Hello there, reader, and welcome to this post. You should know me by now; if not, my name is Abdul Kabia! For the past couple of weeks I was tasked with finding a bug or issue on a GitHub repo and making a contribution to it. Now this is something I've done before, but this …

Continue reading On the fileside of things

by akkabia at April 23, 2018 04:21 AM

Matt Rajevski

SPO600 Project – Part 3 Reflection

After exploring the source code some more, I have come to the conclusion that this program has already been optimized to the fullest. Any improvement I can think of is already in place or doesn't provide much of a performance gain. The original 7-Zip was released on July 18, 1999, so it had plenty of time to develop before the Linux port was created. The p7zip source code was last updated on July 14, 2016, with the latest patch, 16.02, adding a few bug fixes such as a memory access violation fix and a fix for the sha1 function not working in certain situations.

I chose file compression software for this project because it is really interesting how a program can take some amount of data, shrink it by 5% to 40% or more, and still be able to decompress it back to readable form. This process is extremely complex: if the algorithm has even a 0.1% error, then a 1 GB file could lose 1 MB of data, and that could be part of an audio file that gets distorted, a video file missing a frame or two, or a program file that would cause the program to crash.

The process of optimizing this program was hard because the optimizations are already in place. The programs that need the most optimization are the ones that haven't had much time in the open market. Over time, bugs are found and fixed, and performance improvements are made in the areas that need them most. The great thing about the open source community working on a piece of software is that there are potentially hundreds of people looking at the source code, and one of them might notice an improvement that the others didn't. This also takes some stress off the original dev team so that they can put extra focus into adding features to the program.

When initially trying to benchmark the program using gprof, I ran into many issues getting it to work. I had never used makefiles before, and after discovering how useful they can be, I will now use them more frequently. The source files include a massive list of makefiles designed for different CPU architectures and OSes, including one to set up the program for use with gprof. The program compiled fine and ran fine, but when trying to use the gmon.out file, it reported an 'unexpected end of file', which leads me to believe something went wrong when creating the file. Luckily for me, the program has a built-in benchmark option. It didn't provide the same results gprof would have, but it did run multiple tests on each of the functions used in the compression/decompression components.

// Overall program benchmark //
[mrrajevski@aarchie p7zip_16.02]$ 7za b "-mm=*"

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs LE)

CPU Freq: 1998 1999 1999 1999 1999 1999 1999 1999 1999

RAM size: 16000 MB, # CPU hardware threads: 8
RAM usage: 1802 MB, # Benchmark threads: 8

Method       Speed Usage   R/U  Rating  E/U Effec
             KiB/s     %  MIPS    MIPS    %     %

CPU                  661  1999   13223
CPU                  624  1999   12478
CPU                  666  1999   13321  120   800

LZMA:x1      47086   651  2646  17213   159  1034
            149903   657  1859  12209   112   733
LZMA:x5:mt1  10144   659  1924  12673   116   761
            145518   658  1864  12272   112   737
LZMA:x5:mt2  10718   682  1963  13390   118   804
            142188   640  1873  11991   112   720
Deflate:x1  110402   642  2184  14018   131   842
            445994   621  2232  13858   134   832
Deflate:x5   40482   636  2449  15587   147   936
            448804   623  2235  13934   134   837
Deflate:x7   15434   662  2582  17101   155  1027
            475964   657  2247  14771   135   887
Deflate64:x5 38920   672  2503  16819   150  1010
            499315   697  2242  15621   135   938
BZip2:x1     23042   651  2140  13922   129   836
            109903   640  1862  11914   112   716
BZip2:x5     17371   660  2198  14498   132   871
             63509   661  1887  12466   113   749
BZip2:x5:mt2 17328   682  2119  14462   127   868
             62237   688  1776  12216   107   734
BZip2:x7      6172   678  2359  15991   142   960
             63919   653  1920  12535   115   753
PPMD:x1      15871   663  2475  16415   149   986
             12652   646  2308  14899   139   895
PPMD:x5       9558   665  2438  16200   146   973
              8063   652  2319  15111   139   907
Delta:4    2386459   642  2286  14662   137   881
           2062576   634  1998  12672   120   761
BCJ        3844213   673  2338  15746   140   946
           3679123   644  2341  15070   141   905
AES256CBC:1 478672   630  1867  11764   112   706
            488557   649  1850  12007   111   721

CRC32:1    1590921   653  1774  11582   107   696
CRC32:4    4273755   651  1466   9539    88   573
CRC32:8    6023965   646  1265   8168    76   491
CRC64      4182943   657  1303   8567    78   514
SHA256      829640   622  2720  16925   163  1016
SHA1       1175588   622  1770  11004   106   661

CPU                  596  1999  11913
Tot:                 656  2037  13350   122   802


The “biggest” optimization I found was changing the -O flag to -O2, which provided roughly a 0.5s improvement when compressing a 193 MiB folder. The optimizations I did manage to make in the source code, despite providing only a ~0.01% improvement, were some of the ones we covered in class. The multiplication operation was the main focus: the functions were all performing a multiplication inside a loop, and to fix that I either hoisted it out of the loop or used a fixed value instead.
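To illustrate the kind of change described (the real edits were in p7zip's C++ sources; this JavaScript version is just a sketch of the pattern):

```javascript
// Pattern sketch: a multiplication performed on every loop iteration...
function sumScaledNaive(values, factor) {
  let total = 0
  for (let i = 0; i < values.length; i++) {
    total += values[i] * factor  // multiply inside the loop
  }
  return total
}

// ...can often be hoisted out, doing a single multiply at the end.
function sumScaledHoisted(values, factor) {
  let total = 0
  for (let i = 0; i < values.length; i++) {
    total += values[i]
  }
  return total * factor  // one multiply, same result
}
```

In practice a modern compiler will often perform this transformation itself, which is consistent with the tiny gains observed here.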

I never managed to find any worthwhile changes that I could push to the community for the next build of the project, but I did look into the process. The webpage found here contains all the source files, documentation, a support forum, and a ticket system used to track and push changes. Interestingly enough, there is an open ticket for patch 18.01 that was created on February 05, 2018. The ticket is currently empty, but that doesn't mean that someone isn't working on a bugfix, feature, or optimization.

This class shone light on areas of programming that I had never seen before. I had never dealt with assembly, and seeing the inside of a C program really showed the complexity of the deeper levels of programming. Another thing I had never used before was makefiles, as mentioned earlier: they let you manage programs that span multiple files and provide a simple way to build them in multiple configurations. I've always tried to write my programs in a way that is optimal, but after taking this course I realized that the compiler will do most of it for me. I never got to fully experience what it's like to contribute to an open source project, but trying made me have a lot more respect for those who do.

– Matt Rajevski


by mrrajevski at April 23, 2018 03:59 AM

Connor Ngo

Conclusion and some data

I wanted to put some before-and-after videos together, but my recording software was outputting unusable files and I didn't have time to fix it. I did, however, have time to collect data about my optimization!


- Over a 10 second period of randomly duplicating circuits built by the users -

Before: ~2.2 bricks per second

After: ~20 bricks per second

That is roughly a ninefold increase in speed! We are now able to duplicate anything without any stutters or freezes in the program.

April 23, 2018 03:59 AM

Kelvin Cho

OSD600 – Release 0.3

So for our final release, I decided to work on a bug in Brave, which I have previous experience working with.

In this release, I have picked this issue#12569 to work on.

So what is the bug?

Well, the reported bug is that the audio indicator will still be on even if the video is over or stopped.

To start off, I will explain two things about Brave's UI. The first notable thing is that Brave has an audio icon on the tab, like this:

The icon here isn’t very special; it has the same functionality as Firefox's. The user can choose to mute or unmute the tab if they wish to do so. But another thing Brave does is that when the user has too many tabs open, the icon is replaced with a blue bar to indicate to the user that sound is currently coming from that tab.

The bug that is currently happening is that the blue bar audio indicator remains after the video has been paused or has finished playing.


From the information that I gathered after looking at the bug, it seems that if the user mutes the tab itself, it will always display the blue bar indicator on top, no matter what.

The reason I believe this is the cause is that once the tab is muted, Brave doesn't check whether the video has completed or not.


As you can see, the video is clearly over, but the tab is still muted and the indicator remains.

The Process of Bug Finding

So, from what we know, the bug seems to have something to do with audio. The first thing I did was type:

git grep audio

As you can see, we found a lot of results, so let's pick out what looks useful.

The first thing that caught my eye was a file named audioState.js, and another was something called audioTabIcon.js.

So far we found two files that sound interesting and may or may not have to do with our bug.

The first file I looked at was audioTabIcon.js. That file doesn't seem to have anything to do with the audio indicator.

So I moved onto the next file: audioState.js.

After looking at this for a couple of hours, I started to look at how other variables interact with this JavaScript file.

Fixing the bug

Interestingly, audioState.js doesn't really interact with anything that causes it to change; everything seems to be in this file. So my fix for this bug is to add a condition checking whether the audio is muted.

If the audio is muted, the blue indicator will not be shown, as there shouldn't be any sound coming from that tab. After implementing the change, the bug appears to be fixed.
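The essence of the condition (with an assumed state shape, since this paraphrases rather than quotes audioState.js):

```javascript
// Sketch of the added check: show the blue bar only when the tab is
// actually producing sound and is not muted. The state shape here is an
// assumption for illustration, not Brave's actual tab state.
const showAudioBar = (tabState) =>
  Boolean(tabState.audioPlaying && !tabState.muted)

console.log(showAudioBar({ audioPlaying: true, muted: true }))   // false
console.log(showAudioBar({ audioPlaying: true, muted: false }))  // true
```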

So I decided to write a test file to check if the audio indicator will still show or not.

This small test checks whether the audio is muted and then checks for the blue audio border. It is expected to return false, as the bar should not appear.

Now that we have made the changes, we are also ready to make a PR. The PR is here.

In conclusion, finishing this bug felt very refreshing, and I learned a lot more about Brave in general; I wouldn't have known these features existed otherwise. Overall, I think this was a very interesting bug to work on.

by Kelvin Cho at April 23, 2018 03:59 AM

Ilkyu Song

SPO600 Project - Stage 3

I chose Redis (Remote Dictionary Server) for my project in stage 1. Redis is open source software developed by Salvatore Sanfilippo: an in-memory key-value store with optional persistence. It stores and manages data in memory. Let's look at the benefits and data types of Redis.

1. The advantages of Redis

  • Specialized for processing data in lists and arrays: the value supports several data types such as string, list, set, sorted set, and hash. List-type data insertion and deletion are about 10 times faster than in MySQL.
  • Redis transactions are atomic: atomic processing prevents data mismatch when several processes simultaneously request the same key update.
  • Persistent data preservation while utilizing memory: data is not deleted unless explicitly removed with a command or an expiry is set. The snapshot function allows you to save the contents of memory to an *.rdb file and restore it to that point in time.
  • Multiple server configurations: consistent hashing or master-slave configuration.

2. Redis provides five data types, and there are many processing instructions for each.

String
• A string value is not limited to text; binary data can also be saved (note that Redis does not have separate integer or real-number types).
• The maximum size of a value that can be stored under a key is 512 MB.

List
• Can be thought of as an array.
• The maximum number of elements in a key is 4,294,967,295.
• If the value is larger than the threshold set in the configuration file, it is encoded as a linked list or zip list.

Set
• An unordered collection type with no duplicate members in a key.
• The time spent adding, removing, and checking existence is constant regardless of the number of elements in the set.
• The maximum number of elements in a key is 4,294,967,295.

Sorted sets
• Sorted sets are called the most advanced Redis data type.
• Adding, removing, and updating elements is very fast, proportional to the logarithm of the number of elements.
• They can be used in linking systems.
• Each element has a real-number value called a score and is sorted in ascending order by score.
• There are no duplicate members in a key, but score values can be duplicated.

Hashes
• Similar to lists, consisting of a series of field names and field values.
• The maximum number of field-value pairs in a key is 4,294,967,295.
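The set and sorted-set semantics described above can be illustrated with plain Python (a behavioral sketch only; it does not talk to a real Redis server):

```python
# Set semantics: a key holds unique members, so adding a duplicate is a no-op
# (mirrors Redis SADD).
members = set()
for m in ["a", "b", "a"]:
    members.add(m)
print(sorted(members))  # ['a', 'b']

# Sorted-set semantics: unique members, each with a score; members are kept
# in ascending score order, and duplicate scores are allowed (mirrors ZADD/ZRANGE).
zset = {"carol": 1.0, "alice": 2.5, "bob": 2.5}   # member -> score
by_score = sorted(zset.items(), key=lambda kv: (kv[1], kv[0]))
print(by_score)  # [('carol', 1.0), ('alice', 2.5), ('bob', 2.5)]
```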

I compiled the benchmark file for the Redis benchmark with different compile options, and ran benchmarks on the aarchie and x86 servers with different command counts. The result below is the number of commands executed per second. The aarchie server is a bit faster than the x86 server, although it did not show much difference from the test in stage 1. However, the specifications of the two servers are so different that a simple comparison is difficult. Some developers and architects tend to judge performance from code alone, without considering hardware specs. However, the first thing to consider when tuning a database or optimizing code is the hardware specification.

 1. aarchie 

2. x86

Moreover, I ran the benchmark once again in stage 2. I chose the Redis library source in stage 2 and benchmarked it, then used inline assembly (ASM) to try to optimize the code. However, assembly does not guarantee better performance than C; it is better to attempt optimization in C first. The two figures below show the results with the original source and with ASM. The two results are very similar.

I am performing stage 3 and thinking about code optimization again. Code optimization is a program transformation technique that improves code so it consumes fewer resources (i.e. CPU, memory) and produces faster machine code. I should keep this definition in mind. I had assumed that merely converting the code, or changing the compile options, would speed up the program; in a simple program, however, the difference is not that large.

I have to keep a few things in mind for code optimization. First, I need to know the environment of the OS or platform where my program will run. (In fact, the library I chose in stage 2 did not run on x86.) I should also know the specs of the machine my program will run on, so I can give the user a minimum recommended specification. Finally, I should benchmark repeatedly: to make a good program, I have to test it many times. If I follow these three things, I will be able to develop a program that is as close to optimal as possible.

As I worked through this course project, I gained not only knowledge about code optimization but also experience. In programming, experience matters to a programmer as much as coding skill, and this experience will benefit me greatly. This project also taught me how to work upstream, and that code optimization and portability are not simply about changing the programming code: I have to be knowledgeable about operating systems, platforms, and hardware.

by Ilkyu Song at April 23, 2018 03:38 AM

Ruihui Yan

Optimizing P7ZIP

Because of problems with HandBrake (since it uses many libraries and the FFmpeg), I have decided to tackle another project, a simpler one. I will be working on P7ZIP, which is a command-line version of 7-ZIP. It can be downloaded here.

After downloading the files, we build it by running make all_test:

And now it's ready to be used. I am using the same file as in the previous post and compressing it to a .zip file.

Comparing the results from three distinct runs, the average runtime is 1 minute and 45 seconds. Here are the runs:

Run 1: 1m48.808s

Run 2: 1m45.529s

Run 3: 1m43.264s
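As a quick sanity check on that average, converting the runs to seconds and taking the mean:

```python
def to_seconds(t: str) -> float:
    """Convert a time(1)-style 'MmSS.SSSs' string to seconds."""
    minutes, rest = t.split("m")
    return int(minutes) * 60 + float(rest.rstrip("s"))

runs = ["1m48.808s", "1m45.529s", "1m43.264s"]
mean = sum(to_seconds(r) for r in runs) / len(runs)
print(f"mean: {mean:.3f}s")  # ~105.9s, i.e. about 1m45s
```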

Analyzing the source code, I found out that the default optimization flag for gcc is -O, with which "the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time" (Source). Therefore, my plan was to increase the optimization level and write down the results:




To my surprise, not only did the higher levels of optimization make no difference, they actually made the runtime longer.

So I tried disabling optimization altogether with the flag -O0.

Here are the results:

Run 1: 1m40.043s

Run 2: 1m38.414s

Run 3: 1m41.231s

Surprisingly, removing all the optimizations actually made the program run faster. The runtime went down from 1m45s to 1m40s, around 5% faster.

And that concludes this part of optimization.

by blacker at April 23, 2018 03:24 AM

Ray Gervais

Removing the Excess Years from Angular Material’s DatePicker

An OSD700 Contribution Update

So here we are, potentially the last contribution to occur for OSD700 from this developer before the semester ends and marks are finalized. No pressure.

For this round, I wanted to tackle a feature request which I thought would be beneficial for those who utilize the date picker component (a common UI element). The concept is to dynamically remove and add years in the date picker based on the min and max date configurations. Sounds rather simple, right? I thought so, but I also had to admit I had never worked with the code that dynamically generates the calendar and year views to this degree before. The inner workings are vastly complex and data driven, which in itself is an interesting design.

The process so far has been an on-and-off cycle of "hey, I get this" and "I have no idea what to do with the current concepts". You can see throughout my work in progress the swings between understanding, implementing, and asking for advice and suggestions, which gets us to where we are now. Currently, as I'm writing this, with the help of mmalerba and WizardPC, I have the dynamic year portion working as desired; some artifacts still need to be addressed, such as the displayed year range in the header needing to be updated, the years-per-page overlapping on the final year when there is more than a 24-year gap between min and max, and a potential 'today' variable which isn't always the current date.

There have been many revisions to the code base that I’ve been playing in, often rearranging logic and algorithms to accommodate the four edge cases which are:
1. With no Min / Max provided: the Multi-Year Date Picker behaves as current implementation
2. Only min date provided: Year offset is set to 0, making the min-year the first entry
3. Only max date provided: Year offset is set to a calculated index which equates to max-year being the last entry
4. Both min and max provided: Follows same logic as case 3.

Making the first and second edge cases work was relatively painless, in part due to the advice and comments left before I even wrote my first line for this feature set. I've included below that revision and the various revisions I attempted (skipping over the minor changesets) until I finally had a working version a few days later. You can see the progress in my WIP pull request here.

Revision #1 (Min Date Working as Expected)

this._todayYear = this._dateAdapter.getYear(;
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default behavior for offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

// Whole bunch of wrong logic

After I confirmed that this was indeed what we wanted for the second use case (min provided), came the harder algorithmic portion for use cases 3 and 4. What I'm working with looks like the following:

Revision #2 (A lot closer to expected logic)

this._todayYear = this._dateAdapter.getYear(;
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default behavior for offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

if (this._maxDate) {
  const maxYear = this._dateAdapter.getYear(this._maxDate);
  // Keep the number positive
  const yearOffset = (activeYear - maxYear) >= 0
    ? activeYear - maxYear
    : (activeYear - maxYear) * -1;

  // Determine how far to push the offset so that the max year is at the end of the page
  const currentYearOffsetFromEnd = (yearsPerPage / yearOffset) + 1;
  activeOffset = this._minDate ? 0 : currentYearOffsetFromEnd;
}

The snippet below was the logic which should be followed. At first I thought nothing of it, but then I realized that (yearOffset - Math.floor(yearOffset)) would always return 0.

Revision #3 (Snippet)

const yearOffset = (maxYear - activeYear) / yearsPerPage;
const currentYearOffsetFromEnd = (yearOffset - Math.floor(yearOffset)) * yearsPerPage;
const currentYearOffsetFromStart = yearsPerPage - 1 - currentYearOffsetFromEnd;
// Determine how far to push the offset so that the max year is at the end of the page
// const currentYearOffsetFromEnd = Math.floor((yearsPerPage / yearOffset)) + 1;
activeOffset = this._minDate ? currentYearOffsetFromStart : currentYearOffsetFromEnd;
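Plugging sample numbers into that snippet shows what the fractional part does (a Python sketch of the same arithmetic; yearsPerPage is 24 as in the multi-year view, and the sample years are my own):

```python
import math

YEARS_PER_PAGE = 24  # pages of 24 years, as in the multi-year view

def offsets(active_year: int, max_year: int):
    """Mirror the snippet's arithmetic in plain Python."""
    year_offset = (max_year - active_year) / YEARS_PER_PAGE
    from_end = (year_offset - math.floor(year_offset)) * YEARS_PER_PAGE
    from_start = YEARS_PER_PAGE - 1 - from_end
    return from_end, from_start

# A 12-year gap leaves a fractional part of 0.5, so the offset is nonzero...
print(offsets(2018, 2030))  # (12.0, 11.0)
# ...while a gap that is an exact multiple of 24 does give 0.
print(offsets(2018, 2042))  # (0.0, 23.0)
```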

Final Working (Pre Syntax Cleanup)

this._todayYear = this._dateAdapter.getYear(;
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default behavior for offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

if (this._maxDate) {
  const maxYear = this._dateAdapter.getYear(this._maxDate);

  const yearOffset = (maxYear - activeYear) / yearsPerPage;
  const currentYearOffsetFromEnd = (yearOffset - Math.floor(yearOffset)) * yearsPerPage;
  const currentYearOffsetFromStart = yearsPerPage - 1 - currentYearOffsetFromEnd;

  activeOffset = this._minDate
    ? currentYearOffsetFromStart
    : (24 % currentYearOffsetFromEnd) - 1;
}

this._years = [];
for (let i = 0, row: number[] = []; i < yearsPerPage; i++) {
  row.push(activeYear - activeOffset + i);
  if (row.length == yearsPerRow) {
    this._years.push( => this._createCellForYear(year)));
    row = [];
  }
}

Words cannot describe the waves of frustrated “this will never work” monologues and “this is progress” relived exhales occurred during the past week while working on this feature, nor can words describe the amount of dancing-while-no-one-is-around that I did when I finally reached the current implementation. Based on the use cases mentioned above, here is a visual for each:

Case 1: No Min / Max Date Provided

Case 2: Min Date Provided

Case 3: Max Date Provided

Case 4: Both Min / Max Date Provided

I cannot quite explain the thought process that led to the final solution; what I can explain is the biggest flaw in my own thinking. I overthought quite a bit, and became overwhelmed by the thought that I would not complete this, or that the code base was too complex (I will, and it's not). I suppose the time of day I typically worked on this bug didn't suit the mentality needed to approach the code, nor did my mindset of 'one more item due'. Once I took the weekend to correct that, and to slowly relearn the required task and changes (instead of breaking the scope into much bigger, unmanageable portions in an attempt to 'get it done'), my thoughts and attempts became much clearer.

What's left? At the time of writing I still have to fix the headers; isolate, identify, and fix any edge cases the algorithm doesn't take into account; and clean up any leftover commented-out code. I believe it can be done, and after today's progress I can happily say I'm more optimistic than I was on Friday about completing this feature request. I've loved contributing, learning what I can through toil and success, and feeling the "I can accomplish anything" high when the pieces finally click. Once I settle into my new role, I hope to keep contributing both to Angular Material and to new projects spanning different disciplines and interests.

by RayGervais at April 23, 2018 03:20 AM

Sanjit Pushpaseelan

SPO600 Stage 3- Final blog

This will be my final blog post for this project. As I mentioned in yesterday's post, I will recap the work I've done over the past month and a half and analyze my results.
First off, I would like to start by going over the basic benchmarking I did with MD5deep. I was using a txt file that was 2.4 GB in size.

real(s) user(s) sys(s)
9.367 8.151 1.450
9.331 8.189 1.383
9.294 8.300 1.230
9.364 8.378 1.225
9.255 8.365 1.226
9.318 8.311 1.239
9.380 8.271 1.334
9.339 8.356 1.218
9.278 8.334 1.167
9.338 8.448 1.125

The average runtime for MD5deep was about 9.3 seconds with the 2.4 GB file. Please keep this in mind while I talk about the optimizations I made throughout this project.

Exploring build flags

I started off by playing with the build flags to see if I could improve the runtime of the program. I first noticed that the makefile used -O2 to build the program, so I decided to try building with -O3 and -Ofast. My first issue was locating the right makefile: I never realized it at first, but MD5deep actually has two makefiles, and the one I needed to edit was in the src folder where all the ELF files are stored. I got some interesting results when I finally made the changes. (If you want to see the original blog post, click here.)


real(s) user(s) sys(s)
9.294 8.243 1.278
9.319 8.188 1.370
9.326 8.046 1.370
9.391 8.049 1.512
9.336 8.279 1.280
9.324 8.362 1.196
9.328 8.256 1.295
9.284 8.145 1.376
9.284 8.278 1.238
9.381 8.285 1.333


real(s) user(s) sys(s)
9.435 8.269 1.403
9.391 8.302 1.321
9.304 8.322 1.212
9.328 8.318 1.239
9.288 8.237 1.282
9.297 8.237 1.282
9.297 8.268 1.250
9.265 8.451 1.040
9.302 8.190 1.354
9.287 8.214 1.301

As you can see, my results are pretty close to the original build run. While a bit lower than the original build, I can chalk that up to variance since I only ran 10 tests; even then, the difference is so negligible it is not worth mentioning. The question to ask is why -O3 and -Ofast do not help. This required some more research into these compiler flags. I learned that -O3 and -Ofast aren't guaranteed to actually improve the runtime of your code! -O3 turns on the following options along with whatever -O2 turns on:


-Ofast turns on all of -O3's options along with the following:



Just because these flags are turned on doesn't mean they do anything. I wasn't able to figure out what all these flags do, but it is clear that they have no effect on this build. Something I did find is that -O3 can use a cmov, which lengthens the loop dependency chain so it can include said cmov. This might actually have caused my build to run slower, which would have been an interesting result.

Changing the code

The second thing I tried was changing the code. First, I inlined a function, which might have reduced the runtime. You can find the original blog post here.

real(s) user(s) sys(s)
9.266 8.394 1.097
9.372 8.316 1.294
9.289 8.347 1.172
9.285 8.308 1.213
9.334 8.274 1.293
9.350 8.265 1.315
9.313 8.251 1.297
9.264 8.337 1.318
9.286 8.267 1.254
9.319 8.255 1.276

Once again, my runtime was pretty similar to my original build. This is simple to explain: the cost of calling the function wasn't nearly as high as I thought it would be, so the improvement was negligible.
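The call-overhead point can be measured in any language; here is a Python sketch (illustrative only, since MD5deep is C) comparing a tiny function called per element against the same arithmetic inlined by hand:

```python
import timeit

def double(x):
    return 2 * x

def sum_with_calls(n):
    # pays a function-call cost on every iteration
    return sum(double(i) for i in range(n))

def sum_inlined(n):
    # identical arithmetic with the call "inlined" by hand
    return sum(2 * i for i in range(n))

# Both versions compute the same result; only the call overhead differs.
assert sum_with_calls(1000) == sum_inlined(1000)
t_call = timeit.timeit(lambda: sum_with_calls(1000), number=500)
t_inline = timeit.timeit(lambda: sum_inlined(1000), number=500)
print(f"with calls: {t_call:.3f}s  inlined: {t_inline:.3f}s")
```

The bigger the function body relative to the call itself, the smaller the measured gap, which matches the experience above.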

My second attempt to improve the code was to change it to remove some loop-invariant variables.

(Below you can see the changed code and the original code)

What made me curious is that this code crashed when I ran the 2.4 GB file but didn't crash on smaller files. I was never able to figure out why, and I also wasn't able to find the cut-off file size. Hopefully I can work on this issue more after I finish my school work, but for now I have reached a dead end with this problem.

Implementing Assembly Language

My final attempt was to implement assembly language into MD5Deep. You can find the original post here.

#define DISPLAY(x,n) ( __asm__("ror %%cl,%0" : "=r" (x) : "0" (x), "c" (32 - n)) )

Above is the code I used to try and improve the runtime.
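For reference, `ror` by `32 - n` is the usual way to express a left rotation by `n`: rotating a 32-bit value right by `32 - n` gives the same result as rotating it left by `n`, and MD5's round operations are built on left rotation. A Python sketch of that equivalence (illustrative; the real macro operates on C unsigned integers):

```python
MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit value left by n bits (1 <= n <= 31)."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def rotr32(x: int, n: int) -> int:
    """Rotate a 32-bit value right by n bits (1 <= n <= 31)."""
    return ((x >> n) | (x << (32 - n))) & MASK32

x = 0x12345678
for n in range(1, 32):
    assert rotr32(x, 32 - n) == rotl32(x, n)  # ror by (32-n) == rol by n
print(hex(rotl32(x, 4)))  # 0x23456781
```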

real(s) user(s) sys(s)
9.278 8.334 1.167
9.338 8.448 1.125
9.303 8.295 1.237
9.342 8.329 1.243
9.308 8.357 1.179
9.318 8.303 1.255
9.366 8.394 1.203
9.327 8.326 1.244
9.367 8.151 1.450
9.331 8.189 1.383

Once again, these runtimes are similar to my original runtimes. I haven't found a definite answer as to why, but I believe the manipulation I am doing simply isn't enough to have a measurable effect. It is a similar story to the function inlining: the improvement is so negligible that it is not worth mentioning.


Final reflection

Despite my results, I had fun working on this project (when I had the time). I learned a bit about the behavior of compile flags, and learned that sometimes it just isn't worth the effort to try to optimize code. Sadly, my efforts were in vain and I was unable to get any significant results out of my time.

by sanjitps at April 23, 2018 02:47 AM

Svitlana Galianova

It's only a beginning

What have I learned for the past 8 months of Open Source Programming?

I was convinced from the very beginning that this course would be important for my career, experience and learning process. But at the same time I was scared: what can I do for a large project in a hype technology company? Am I smart enough? Do I have enough knowledge?

The first thing that I learned is how to feel confident. Before this course, when I got a new task or assignment, either in school or at work, I would stress out. Now the first thought that pops into my head is "I can do it". Open source taught me that I don't have to know everything; in fact, nobody can know everything, it is just impossible. The question really is: where can I find the needed resources? With Google and Stack Overflow (God bless the person who had that amazing idea to connect the community), there is nothing to stress about; everything can be found online.

Another thing that I have learned: a programmer is not a person who sits somewhere alone and just comes up with brilliant ideas; it's a community where ideas from hundreds of people are combined and strong software is built. You are not supposed to be scared to ask for help if you need it. Sometimes one small push/line/idea will start the thought process, and another idea will be born.

All those conclusions sound so obvious, but it's hard to actually believe them and experience that happiness of being connected with other people, or the opportunity to ask for help if you need it.
I am grateful to the Open Source course at Seneca, which showed me another way software may be built: how to stand out from the crowd, how to keep up with new trends in technology, and how to be part of a massive community around the world. Sometimes open source leads to getting a job, and it always leads to valuable experience and brushing up your skills. Your GitHub profile is a real-time resume, and it really shows that programming is your passion, or at least something you enjoy doing in your free time.

I am amazed by how much my personality was changed and how I became more confident as a programmer. I am also not that anxious when I don't have an answer for another question from my manager.

It's not the end of my open source "career" for sure; I will keep contributing to the Mozilla Addons-Frontend project since I enjoy it so much. I think open source is a great place to maintain your current skills and gain new ones. I am happy that I had the opportunity to learn that GitHub is so much more than just a version control tool! It is such a thrill to see another email notification from GitHub; suddenly I feel important.

by svitlana.galianova at April 23, 2018 01:44 AM

Patrick Godbout

DPS909 Release 03: My Second Open Source Contribution

So after a lengthy but rewarding experience completing the pull request for my first open source contribution, we were faced with building on that experience by tackling Release 0.3, a step up from what we had already accomplished. For it, I've decided to take on what I considered to be a more difficult bug in the Brave browser.

The bug I've chosen can be found here:

Introduction; Explaining the Bug, Exploring Known Territory

The bug this time around involves the back and forward navigation buttons, more precisely, what happens when you press and hold one of those buttons for a long time. Here's a visual representation of the bug;

When pressing one of the navigation buttons and holding the mouse button, a dropdown appears listing previously visited websites (for back) or previously navigated websites (for forward). The bug happens when you release the click and then click the same button once more to perform the back or forward action. The action is performed, but the dropdown list stays alive and displayed on your screen, which is unintuitive given that the list doesn't update after the second click. Therefore, this is a bug and needs to be fixed.

This was familiar territory, as I recalled my previous experience from Release 0.2 dealing with the Brave browser history and its components. That was a huge help at first, giving me many leads on finding the source of this particular problem. Without too much effort, I narrowed the interesting code down to the following files:


What follows is a look at what's interesting in each of those files, including what I've added or modified.

Problem Solving Approach; How to Fix this Bug

The first thing I did when approaching the problem was working out how to fix it; in Release 0.2 I had not been thorough enough in this step. I looked into other browsers and tried replicating the bug in both Chrome and Firefox. They both did what I expected the behavior to be: when the menu shows up after a long press of the back or forward button, clicking the back or forward button once more makes the dropdown list go away. This became the goal I was aiming for.

I looked into the call stack of the events that fire when reproducing the bug, which is what allowed me to narrow things down to the list of files above. Without further ado, here's a more in-depth look at each file.

File: app\common\state\contextMenuState.js

Now, the GitHub issue for this bug includes a comment mentioning that a previous fix to a different issue was similar to what needs to be done here. That fix dealt with navigation on the browser's hamburger menu, which is included in the picture linked here. Generally speaking, this part of the code helps set the context menu details for the hamburger menu, so that when hovering over any of its components, the menu knows to switch from one component to the other. Since we're looking for similar behavior with the back and forward buttons (the only difference being how it is toggled), I added similar pieces of code with a new typing variable (onBackLongPressMenuWasOpen) to indicate whether or not the menu should be displayed below the forward and back buttons.

File: app\renderer\components\navigation\buttons\backButton.js

This file was interesting because it contains the class that constitutes the back and forward buttons, as well as the methods that are called and needed for this fix. The onBack method performs several checks on whether the previously visited tab is navigable. If it is, it clones the tab and renders it active; if not, it makes the current tab active, which to the user's eyes does nothing. The onBackLongPress method finds the parent node, which helps it identify which component contains the dropdown list or whatever child component it may have. Storing it in the rect variable, it then passes its position coordinates to the onGoBackLong method, which handles displaying the menu. That method is part of the next file we will look at.

File: app\renderer\reducers\contextMenuReducer.js

For the sake of explanation: onLongBackHistory and onLongForwardHistory share a similar structure, the only difference being how they retrieve history objects. An outer if check was added, with a type tag attached to the action, to try to prevent the creation of the submenu, which happens within the second-outermost if check (line 541).

Once the code detects that there is history to be shown by the submenu, it creates a menuTemplate which is sent elsewhere for display. This is the part of the code we want to change, since we don't always want the menu to be created. The addition and detection of the tag within this code prevents the creation of the menu, defaulting it to the empty template you can find in the else statement on line 587.

If you notice, line 584 sets the type to the typing we declared in the first file mentioned above; this is the tag that lets the context menu know the difference. On the other hand, when we don't fall into this situation from the if check, line 590 toggles the type from what we declared to 'false', allowing the creation of the submenu the next time the code hits this method.

Examining more Files, and Discussing the Solution

File: app\browser\reducers\tabsReducer.js

In this file, APP_ON_GO_BACK and APP_ON_GO_BACK_LONG, as well as their FORWARD counterparts, are the sections of code called once the handling of the back and forward submenus comes into play. This file is important because the way the information gets sent to the goBack and onLongBackHistory methods from the files above can dictate how the display of these components is handled before it gets there.

Discussing the Solution

The solution has not been completed as of now (2018-04-22); there are key steps missing in these files, but the locations where logic needs to be added have been pinpointed. Generally speaking, if the toggling mechanism works and lets the code know how to initialize the components under the forward and back buttons, the rest will follow.

Closing Words

This will be my last post for my Open Source Development class, and as such I would like to take the chance to speak about how valuable an experience this has been. As a programmer, having had the chance to touch so many open source concepts within the short timeframe of a semester has simply been very enriching.

In regards to our teacher, David Humphrey, we've had a very knowledgeable information bank at our disposal as the course progressed, and will remain one of the important factors that let me expand my knowledge of Open Source development.

In regards to our release assignments which often included pull requests and contributions to open source projects as opposed to our laboratory experiments, they will continue to be hard fact proofs of all of our progress in the class within the open source world. Being able to say we've contributed to major projects is an achievement that will last a long time.

Finally, in regards to my classmates and the way the course was built, the experience was enhanced further by contributing to our own projects as a base to build on. These structures are what led us to the possibility of contributing to major projects, and eased our way there as much as they should.

I hope to continue blogging someday about programming as it is a passion and career, and a good reminder and help to others who share that as a career.

by Patrick G at April 23, 2018 01:43 AM

Hao Chen

Default behavior

This week in Open Source, I will be tackling this issue.

The cursor position within the QuickOpenInput jumps around when the user tries to select from the listed options with the UP and DOWN arrow keys. Not so convenient if you wish to add more characters to the input.

To start, I want to see what function is being called whenever the UP or DOWN arrow key is pressed. To do this, I set a breakpoint within the HTML page to pause on subtree modification.

I wasn't able to find a specific function that alters the state of the cursor position. Within the DOM, the input field has two properties I wish to observe: selectionStart and selectionEnd. I googled around and found out that the cursor jumping to the beginning and end of an input is actually intended behavior.

So I tested this out to confirm:

So how do I prevent the default behavior of an event?


Below is a definition of what this function should do according to W3Schools:

I added the call within the ArrowUp and ArrowDown event handlers.


Default behavior was originally published in Haorc on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Hao Chen at April 23, 2018 01:39 AM

Adam Kolodko


The overnight tests of Blender showed that the '-finline-functions' flag was what caused the 'perlin()' function to improve by a factor of twelve. The hope was that an optimizer flag could improve that function without negative effects on the others. Unfortunately this was not the case: the -finline-functions flag may have improved performance in the one function, but every other function has increased in run time.

Below is the gprof output for -finline-functions; the comparison is based on the results posted previously.

% time    cumulative seconds    name
44.26 1461.83 ccl::QBVH_bvh_intersect_hair(ccl::KernelGlobals*, ccl::Ray const*, ccl::Intersection*, unsigned int, unsigned int*, float, float)
13.56 1909.60 ccl::noise_turbulence(ccl::float3, float, int) [clone .constprop.197]
7.54 2158.54 ccl::QBVH_bvh_intersect_shadow_all_hair(ccl::KernelGlobals*, ccl::Ray const*, ccl::Intersection*, unsigned int, unsigned int, unsigned int*)
7.20 2396.40 GaussianYBlurOperation::executePixel(float*, int, int, void*)
3.56 2513.84 ccl::svm_eval_nodes(ccl::KernelGlobals*, ccl::ShaderData*, ccl::PathState*, ccl::ShaderType, int)
3.05 2614.52 ccl::kernel_path_trace(ccl::KernelGlobals*, float*, int, int, int, int, int)
2.06 2682.40 ccl::shader_setup_from_ray(ccl::KernelGlobals*, ccl::ShaderData*, ccl::Intersection const*, ccl::Ray const*)
1.88 2744.62 ccl::light_sample(ccl::KernelGlobals*, float, float, float, ccl::float3, int, ccl::LightSample*)
1.85 2805.79 ccl::kernel_path_surface_bounce(ccl::KernelGlobals*, ccl::ShaderData*, ccl::float3*, ccl::PathState*, ccl::PathRadianceState*, ccl::Ray*)
1.58 2858.14 GaussianXBlurOperation::executePixel(float*, int, int, void*)
1.03 2892.22 ccl::primitive_tangent(ccl::KernelGlobals*, ccl::ShaderData*)
0.91 2922.42 svbvh_node_stack_raycast(SVBVHNode*, Isect*)
0.91 2952.52 ccl::perlin(float, float, float)

Something to notice: my worry about the optimization causing another function called ‘microfacet_beckmann()’ to be called in place of ‘perlin’ was unfounded.

Another thing to notice is that every other call has increased runtime. This may mean we want to isolate this function and simply inline it on its own.

Let’s take a look at this function.

#ifndef __KERNEL_SSE2__
ccl_device_noinline float perlin(float x, float y, float z)
{
	int X; float fx = floorfrac(x, &X);
	int Y; float fy = floorfrac(y, &Y);
	int Z; float fz = floorfrac(z, &Z);

	float u = fade(fx);
	float v = fade(fy);
	float w = fade(fz);

	float result;

	result = nerp (w, nerp (v, nerp (u, grad (hash (X , Y , Z ), fx , fy , fz ),
	                                    grad (hash (X+1, Y , Z ), fx-1.0f, fy , fz )),
	                           nerp (u, grad (hash (X , Y+1, Z ), fx , fy-1.0f, fz ),
	                                    grad (hash (X+1, Y+1, Z ), fx-1.0f, fy-1.0f, fz ))),
	                  nerp (v, nerp (u, grad (hash (X , Y , Z+1), fx , fy , fz-1.0f ),
	                                    grad (hash (X+1, Y , Z+1), fx-1.0f, fy , fz-1.0f )),
	                           nerp (u, grad (hash (X , Y+1, Z+1), fx , fy-1.0f, fz-1.0f ),
	                                    grad (hash (X+1, Y+1, Z+1), fx-1.0f, fy-1.0f, fz-1.0f ))));
	float r = scale3(result);

	/* can happen for big coordinates, things even out to 0.0 then anyway */
	return (isfinite(r))? r: 0.0f;
}
#else
ccl_device_noinline float perlin(float x, float y, float z)
{
	ssef xyz = ssef(x, y, z, 0.0f);
	ssei XYZ;

	ssef fxyz = floorfrac_sse(xyz, &XYZ);

	ssef uvw = fade_sse(&fxyz);
	ssef u = shuffle(uvw), v = shuffle(uvw), w = shuffle(uvw);

	ssei XYZ_ofc = XYZ + ssei(1);
	ssei vdy = shuffle(XYZ, XYZ_ofc); // +0, +0, +1, +1
	ssei vdz = shuffle(shuffle(XYZ, XYZ_ofc)); // +0, +1, +0, +1

	ssei h1 = hash_sse(shuffle(XYZ), vdy, vdz); // hash directions 000, 001, 010, 011
	ssei h2 = hash_sse(shuffle(XYZ_ofc), vdy, vdz); // hash directions 100, 101, 110, 111

	ssef fxyz_ofc = fxyz - ssef(1.0f);
	ssef vfy = shuffle(fxyz, fxyz_ofc);
	ssef vfz = shuffle(shuffle(fxyz, fxyz_ofc));

	ssef g1 = grad_sse(h1, shuffle(fxyz), vfy, vfz);
	ssef g2 = grad_sse(h2, shuffle(fxyz_ofc), vfy, vfz);
	ssef n1 = nerp_sse(u, g1, g2);

	ssef n1_half = shuffle(n1); // extract 2 floats to a separate vector
	ssef n2 = nerp_sse(v, n1, n1_half); // process nerp([a, b, _, _], [c, d, _, _]) -> [a', b', _, _]

	ssef n2_second = shuffle(n2); // extract b to a separate vector
	ssef result = nerp_sse(w, n2, n2_second); // process nerp([a', _, _, _], [b', _, _, _]) -> [a'', _, _, _]

	ssef r = scale3_sse(result);

	ssef infmask = cast(ssei(0x7f800000));
	ssef rinfmask = ((r & infmask) == infmask).m128; // 0xffffffff if r is inf/-inf/nan else 0
	ssef rfinite = andnot(rinfmask, r); // 0 if r is inf/-inf/nan else r
	return extract(rfinite);
}
#endif

You can see that this function is divided into SIMD and non-SIMD versions; because this build is x86 I will assume that it compiled as the SIMD version.

For some reason this function carries the noinline declaration. I’m unsure why this might be the case, and if I had the time I would rebuild Blender with only perlin as an inline function.

Unfortunately that would be out of scope, as I’m just testing optimizer flags in this project. Through sheer brute force it is clear that individual optimization flags aren’t the way to improve performance.

Below is a table of each optimization flag and its corresponding effect on Blender’s runtime.

Flag  Runtime/Seconds

-O2                             3245.36

-fvect-cost-model               3242.05
-floop-unroll-and-jam           3247.36
-ftree-partial-pre              3247.84
-ftree-loop-distribute-patterns 3251.57
-fsplit-paths                   3252.05
-floop-interchange              3255.06
-ftree-slp-vectorize            3255.77
-ftree-loop-vectorize           3260.45
-fpredictive-commoning          3266.47
-fgcse-after-reload             3275.78
-ftree-loop-distribution        3288.16
-fpeel-loops                    3283.03
-fipa-cp-clone                  3283.68  
-finline-functions              3303.00
-funswitch-loops                3306.21 

-O3                             3350.36

by ahkol at April 23, 2018 01:31 AM

Svitlana Galianova

Release 0.6: More contributions

So it's been another two weeks already.
I am still on the "honeymoon" phase with Mozilla Addons-frontend project. There are always bugs for me to fix. 

My strategy didn't change that much from the previous release:

1) find a bug
2) reproduce a bug
3) find a quickest fix modifying state of the properties right in Google Developer tools
4) try to make the fix more specific for the needed area of code
5) improve my fix, remove redundant code
6) submit PR

The first bug I fixed was caused by my previous PR, so I felt it was my responsibility to fix it. It was about putting the right icon on the right error message. Originally the green message had an exclamation mark as an icon, but I changed it to be the Mozilla Firefox icon. My fix affected the red message as well, so I wrote a different SCSS class to handle the green message.
I was going through the list of bugs and saw a few similar ones about unbroken strings: when a string is too long, it’s not nicely cropped with an ellipsis at the end, but continues to live through the next containers and only ends at the edge of the browser (like the URL in the picture):


I had not handled this situation before, and I feel it’s something all programmers who touch front-end should know. So I decided to dig in. I found the issue and a solution, and submitted another PR.

So I learned how to crop unbroken strings:
display: block;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap; /* assumed: text-overflow only takes effect when the text can't wrap */

I came across another similar bug and decided to contribute there as well.
So if you are Mozilla Firefox user and you are interested in Addons, the homepage will not be overwhelmed by overflowing strings:


For now, none of my pull requests are merged; they are pending review, probably on Monday.

by svitlana.galianova ( at April 23, 2018 01:18 AM

Adam Kolodko

A build for every flag

The goal was to find which specific optimization flag affected the ‘perlin()’ function. This was done through brute force: 15 different builds were made using the 15 different O3 flags on top of a normal ‘-pg -O2’ version of Blender.


Below is what it looks like to run 15 concurrent build processes.


After the build process finished a few hours later, I wrote a simple bash script to run the 15 tests and gprof each one. This testing process will take about 7 hours, as each image render takes 25 minutes.

This is a sample of the script


cd ~/blender-git/ftreeDP/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeDP -f 1
gprof ./blender > testFtreeDP

cd ~/blender-git/ftreeS/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeS -f 1
gprof ./blender > testFtreeS.txt

cd ~/blender-git/fsplit/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fsplit -f 1
gprof ./blender > testFsplit.txt

cd ~/blender-git/floopI/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/floopI -f 1
gprof ./blender > testFloopI.txt

cd ~/blender-git/floopU/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/floopU -f 1
gprof ./blender > testFloopU.txt

cd ~/blender-git/ftreeD/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeD -f 1
gprof ./blender > testFtree.txt

cd ~/blender-git/finline/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/finline -f 1
gprof ./blender > testFinline.txt

cd ~/blender-git/ftreeV/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeV -f 1
gprof ./blender > testFtreeV.txt

cd ~/blender-git/fpredictive/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fpredictive -f 1
gprof ./blender > testFpredictive.txt

cd ~/blender-git/fgcse/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fgcse -f 1
gprof ./blender > testFgcse.txt

cd ~/blender-git/funswitch/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/funswitch -f 1
gprof ./blender > testfunswitch.txt

The logic is as follows:

# move to the home directory to prevent rerunning a test in case of an invalid directory
cd
# move to the directory meant for this flag's build
cd ~/blender-git/funswitch/bin/
# perform the render: -b runs without the GUI, ~/blend_cat/fishy_cat.blend is the file
# to render, -o ~/funswitch is the output directory for the image, and -f 1 renders
# frame 1 as if it were an animation with one image
./blender -b ~/blend_cat/fishy_cat.blend -o ~/funswitch -f 1
# create a gprof text file to examine the test results
gprof ./blender > testfunswitch.txt

Next blog post I will evaluate the results and attempt to understand the reason for the changes.

by ahkol at April 23, 2018 01:07 AM

Vimal Raghubir

Fixing Bugs in Kubernetes Website

So in this week’s open source adventures, I decided to tackle some bugs in the Kubernetes Website GitHub repository. This repository can be accessed here. Before I discuss the bug fixes I made in this repository, I would like to highlight the reason I chose it. If you have read one of my previous blog posts, titled “Kubernetes”, you will already know that I am deeply fascinated by Kubernetes as well as other DevOps platforms. For this reason, I decided to tackle some bug fixes in Kubernetes regardless of what the bug is or what language it is in.

After exploring several repositories in Kubernetes, I noticed a trend of bugs that required intermediate to expert knowledge of the software/language to fix, including the beginner bugs. I have started learning Go, which is the language behind the majority of Kubernetes’ architecture, as well as experimenting with the application on my own. What I came to realize is that it’s an ongoing learning experience, and I need to build a foundation before I REALLY start to tackle bugs in the application.

Until that time comes, I decided to tackle simpler bugs, such as in their website, which can be seen here. The first fix I made was to change some documentation that was causing confusion for other developers. My Pull Request can be accessed here. The documentation previously stated that you would need to use the NodePort value provided by accessing your Service’s details, which is incorrect. As stated by Dick Murray, a contributor to this repository, the NodePort value does not work in conjunction with your external IP address; the Port value does.

So the recommended fix was to change the documentation to reflect this. Below is the change on GitHub.

And below is the change on the actual website.

Onto the second bug fix! So for this bug fix, there was some header text that was hiding behind the main header and was only visible on the Safari browser. Although it could only be seen in the Safari browser, the text could still be searched on all browsers as shown below on Safari.

If you cannot make out the header text clearly from this picture, it can be seen clearer in the screenshot below.

Although it isn’t necessarily bothering anyone, it is still a bug that requires cleaning up. I first had to pinpoint the exact file this code exists in, but thankfully this was done for me by Zachary Sarah. Zachary’s recommendation was to simply remove this navigation bar, since it isn’t useful anyway. I made my Pull Request and it has successfully been merged. The changes can be seen below.

This was a fix that was gladly accepted by the developers in this repository as indicated by Brad Topol.

In conclusion, I am absolutely excited to have finally dipped my toes in the water of Kubernetes after several weeks of my open source adventures. This is definitely only the beginning, and I am ecstatic to continue fixing bugs in this repository as well as others that Kubernetes has. Not only is fixing bugs rewarding, but I finally feel like I am making an impact with my development compared to the other school projects I’ve done. I am beyond thankful for all the lessons from my open source professor David Humphrey, and will not put his teachings to waste! Open source has taught me to think bigger than ever before, and that is something everyone is seeking in all aspects of life.

Once again I do look forward to many more open source adventures and this is the mark of many more to come! See you in my next adventure!

by Vimal Raghubir at April 23, 2018 12:35 AM

Jeffrey Espiritu

SPO600 Project – Stage 3 / Part 3

Inline Assembly Additions I changed the for loop in the FLAC__fixed_compute_best_predictor function from this: to this: FLAC__int32 errors[4]; FLAC__uint32 total_error_0 = 0; register FLAC__uint32 total_error_1 asm("r19") = 0; register FLAC__uint32 total_error_2 asm("r20") = 0; register FLAC__uint32 total_error_3 asm("r21") = 0; register FLAC__uint32 total_error_4 asm("r22") = 0; __asm__ ("movi v10.4s, #0" ::: "v10"); for (i = … Continue reading SPO600 Project – Stage 3 / Part 3

by jespiritutech at April 23, 2018 12:15 AM

April 22, 2018

Dan Epstein

Optimizing & Benchmarking SPO600 Project Stage 3

Recap on Stage 2

Previously, in stage 2, I performed multiple benchmark tests on sha256deep with the altered build option O3, compared to the currently implemented O2. I tested the time it takes to hash files of 10MB, 100MB and 1GB. The tests took place on multiple servers with different hardware and configurations. The first server is aarchie, which has an ARMv8 (AArch64) architecture. I then performed tests on the bbetty and charlie servers, which have the same architecture, just with more memory. I compared the benchmark results between aarchie and xerxes (x86_64 architecture). Below are the benchmark results from stage 2. I have only included AArch64 and x86_64 for comparison because they are different architecture types.


[Benchmark screenshots omitted: aarchie (AArch64) and xerxes (x86_64) results at O2 and O3, for each of the 10MB, 100MB, and 1GB test files.]

The server showing the biggest gain is aarchie, with an improvement of 5.88% when hashing the 1GB file using the O3 flag. There is almost no difference when hashing a small file. For xerxes, the time decreased by about 0.38% for the 1GB file. I’ve noticed that with a larger file, O3 seems to be the better option. Unfortunately, there is not much of an improvement for small files on any of the servers.


Then, in the next part of stage 2, I wanted to further optimize this project’s function sha256_update (sha256deep). Sadly I couldn’t, because the code already seems to be optimized. The reason I believe this function is already optimized is that during my research I found that memcpy is the fastest copy method in C. The alternative to memcpy is inline assembler, which could be more efficient because you have more control over how the data is copied. The other sign that this code is mostly optimized is that it’s using the right data types (uint8_t & uint32_t), which are best suited for storing small values.


As I mentioned in my previous blog, I think this could be better optimized by using inline assembler to replace the memcpy function, but since I don’t have much experience with assembly language this couldn’t be implemented. The whole project experience was tough and challenging, but I feel I learned a lot from it. The hardest part was finding a function that could potentially be optimized. I learned many techniques for evaluating a function and trying to optimize it using the optimizations we learned in class, different build options, and software profiling. However, I would need more experience and practice in order to fully optimize this project (i.e. to use inline assembler).

I have decided not to submit a pull request to the hashdeep project repository because it seems it would take a very long time to get a response or for the changes to be accepted (there are pull requests still pending since February 2018). Therefore, there won’t be enough time to get these changes accepted and report back. Overall, this was a great experience and hopefully these blogs can help any students who take SPO600 in the future.

by Dan at April 22, 2018 10:51 PM

Ruihui Yan

Project Initialization And Benchmarking

Because of problems encountered with FFmpeg, I have decided to shift the project to HandBrake. HandBrake is an open-source tool for converting videos from almost any format to one that is compatible with modern devices.

First we update the list of packages available to install:

sudo apt-get update


Then we install the dependencies required for Ubuntu:

sudo apt-get install autoconf automake build-essential cmake git libass-dev libbz2-dev libfontconfig1-dev libfreetype6-dev libfribidi-dev libharfbuzz-dev libjansson-dev libmp3lame-dev libogg-dev libopus-dev libsamplerate-dev libtheora-dev libtool libvorbis-dev libx264-dev libxml2-dev m4 make patch pkg-config python tar yasm zlib1g-dev


Then we clone the HandBrake repository:

git clone && cd HandBrake


Now, we build the package:

./configure --launch-jobs=$(nproc) --launch --disable-gtk


We added --disable-gtk to disable the graphical interface, since we won’t use it.

Once it’s built, we can find the HandBrakeCLI in ./build.

For our project, we are going to use uncompressed Deadpool 2 Trailer:

Complete name               : \\XOXO\video\deadpool.mp4
Format                      : MPEG-4
Format profile              : Base Media / Version 2
Codec ID                    : mp42 (isom/iso2/avc1/mp41)
File size                   : 226 MiB
Duration                    : 2 min 30 s
Overall bit rate            : 12.6 Mb/s
Movie name                  : Deadpool 2 -
Encoded date                : UTC 2018-03-22 22:48:59
Tagged date                 : UTC 2018-03-22 22:48:59
Writing application         : HandBrake 1.0.7 2017040900

ID                          : 1
Format                      : AVC
Format/Info                 : Advanced Video Codec
Format profile              : High@L4.1
Format settings             : CABAC / 4 Ref Frames
Format settings, CABAC      : Yes
Format settings, ReFrames   : 4 frames
Codec ID                    : avc1
Codec ID/Info               : Advanced Video Coding
Duration                    : 2 min 30 s
Bit rate                    : 12.0 Mb/s
Width                       : 1 920 pixels
Height                      : 802 pixels
Display aspect ratio        : 2.40:1
Frame rate mode             : Variable
Frame rate                  : 23.976 (24000/1001) FPS
Minimum frame rate          : 23.974 FPS
Maximum frame rate          : 23.981 FPS
Color space                 : YUV
Chroma subsampling          : 4:2:0
Bit depth                   : 8 bits
Scan type                   : Progressive
Bits/(Pixel*Frame)          : 0.325
Stream size                 : 215 MiB (95%)
Writing library             : x264 core 148 r2708 86b7198
Encoding settings           : cabac=1 / ref=2 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=6 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=6 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=1 / keyint=240 / keyint_min=24 / scenecut=40 / intra_refresh=0 / rc_lookahead=30 / rc=2pass / mbtree=1 / bitrate=12000 / ratetol=1.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / vbv_maxrate=62500 / vbv_bufsize=78125 / nal_hrd=none / filler=0 / ip_ratio=1.40 / aq=1:1.00
Encoded date                : UTC 2018-03-22 22:48:59
Tagged date                 : UTC 2018-03-22 22:48:59
Color range                 : Limited
Color primaries             : BT.709
Transfer characteristics    : BT.709
Matrix coefficients         : BT.709

ID                          : 2
Format                      : AC-3
Format/Info                 : Audio Coding 3
Codec ID                    : ac-3
Duration                    : 2 min 30 s
Bit rate mode               : Constant
Bit rate                    : 640 kb/s
Channel(s)                  : 6 channels
Channel positions           : Front: L C R, Side: L R, LFE
Sampling rate               : 48.0 kHz
Frame rate                  : 31.250 FPS (1536 SPF)
Bit depth                   : 16 bits
Compression mode            : Lossy
Stream size                 : 11.5 MiB (5%)
Title                       : Surround / Surround
Language                    : English
Service kind                : Complete Main
Default                     : Yes
Alternate group             : 1
Encoded date                : UTC 2018-03-22 22:48:59
Tagged date                 : UTC 2018-03-22 22:48:59

The file can be downloaded here.

I will be converting the video using the x265 codec, which provides a better compression rate without losing quality.

Here is the command I used for the conversion:

HandBrake/build/HandBrakeCLI -e x265  -i ~/deadpool.mp4 -o ~/output.mp4

Run 1:

Run 2:

We can see that the runtime for HandBrake to convert the video is around 14 minutes. My goal is to optimize it and improve that time.

by blacker at April 22, 2018 10:20 PM

Matteo Peruzzi

Observables and CORS

In some recent tinkering I’ve been doing on one of my Angular apps I came across a particular error I wasn’t expecting. I was in the process of building a service that would allow me to extract a summary of a Wikipedia article (an API request) when my requests kept getting denied.
The particular method I have been using to make API requests is an Observable wrapping the GET. While this has worked on APIs like the NHL’s, with Wikipedia there are some security issues holding my data back. In my console debugger I get the following message.

No 'Access-Control-Allow-Origin' header is present on the requested resource.

After some further digging I found that this error comes about because my request didn’t have the proper headers required to access Wikipedia’s API, a mechanism brought about by CORS.

What is CORS exactly? It stands for Cross-Origin Resource Sharing and serves as a barrier protecting resources from cross-origin HTTP requests made within scripts, helping mitigate the issues such requests can cause. There are simple Chrome extensions that can be used to get around it, but from a developer perspective that doesn’t do us any good.

Properly getting around CORS with our Observables is unfortunately a little tricky. If an API supports it, one can instead make a JSONP request for the GET. JSONP allows the code to treat the API response as if it were a JavaScript script, and treating the request like a script gets around the CORS restriction.

So I converted my code to make use of JSONP:

getTeamWiki(name): Observable<any> {
  const urlRoute = '';
  return this.http.jsonp(urlRoute + name, 'JSONP_CALLBACK');
}

Unfortunately, while this did get rid of my Access-Control error, I’m now faced with a new error regarding MIME type checking being too strict, which is something I’ll have to leave for another blog post once I fix that issue.

It’s a little frustrating, since just pasting that URL into my address bar gets me the response in my browser. For example, this search for ‘Bee’ on Wikipedia returns exactly the result I want, but the server permissions are blocking my code from using it properly.

I’ll be posting this issue on various sites to see if I can get a community solution to my error.

by Peruzzi Blog at April 22, 2018 09:27 PM

Thomas Nolte

SPO600 Project Stage 3


This blog is for wrapping up the final project and discussing the results and experience of the work accomplished. First I will give an overview of what I wanted to do with the project, then I will examine the results of the work I did, and finally I will conclude with my experience on the project as a whole.

Read more on the project here.

Finding the Software

To start this project I first wanted to find an open source software that was CPU intensive. At first I tested a few text compression programs; however, the nature of those projects is to have the code fairly optimized to begin with. After a few failures I sought out a different program to work on and found Kvazaar, an open source video encoding project. Unlike the smaller compression libraries I was looking through before, Kvazaar is a large project with many source files. With tens of thousands of lines of code to go through, I decided to focus on this software, as the chance of finding places to optimize was high. With so many source files I first had to benchmark the software to see where it might be slowing down. After a few tests with the appropriate files, the run time was long enough to give an accurate assessment of which functions I should focus on optimizing. After isolating 6 functions that took the large majority of the run time, I began to delve into the code.

Read more on the software and my strategy in my stage 1 blog.

Code Changes and Results

Upon further examination of the code it was still clear that the code itself was written very well, and understanding the long-running functions would take a lot of time. The first thing I looked for was whether any of the loops could be combined to eliminate unneeded iterations. This ended in failure, as the code was already separated as well as it could be, with the code in later loops reliant on the code in the loops above. With this observation I turned my attention to how some of the variables were being initialized and used. If variables were being initialized more often than needed, some simple repositioning of the code could impact the run time significantly. Likewise, if some of the constant variables were rarely used, simply inlining the value of the variable would remove the need to initialize it, with the added benefit of saving memory space. These two approaches are where I found my strongest results, as the longest-running function had places to implement both changes.

The results of my changes looked like this:


The changes I made improved the overall run time of the program by ~0.3%, by making the longest-running function 6% faster. However, I believe the strongest result of my changes is the reduced variance between multiple runs. As video encoding software, I assume anyone using it will run it many times for multiple videos. By making the run time vary less, the estimated time to run the program many times becomes much more accurate with my changes. With these changes showing strong results, this is where I stopped my optimization work and began wrapping up the project.

To read more on my code changes read my stage 2 blog.

Committing to upstream

I’ve shown my results to the community and am currently waiting on a response. As the community does not run on the same schedule as the class, I have to blog about my results without completing the last piece of the project. Given how minor my changes to the code are, I believe I will get them committed to the live version of the software.

Project Conclusion

This project really tested what I have learned in this course and the last 5 semesters of college. Finding a viable open source software was the most time-consuming part of this project, as there were not many ways to track down a CPU-intensive program that would build easily on AArch64. Once the benchmarking process was done, I thoroughly enjoyed going through the source files analyzing what code was running the slowest. The code changes I made are easy to understand, and it is unsurprising that the original developer overlooked them, as they are simple readjustments that make the code less readable. However, this shows me that even after you write a program and get it to run, there is still good reason to go back and improve your code. I would have liked to look for further changes to show a stronger improvement to the overall run time, but didn’t want to get in over my head just before the end of the semester. However, I am content with the results I got and look forward to getting my changes upstreamed.

by nolte96 at April 22, 2018 08:42 PM

Dennis Arul

Another Update

Alright, so after my previous attempt at optimizing with the -O3 flag was not successful, I started looking into other flags that could potentially speed up the program. In my search I found the -Ofast flag. -Ofast enables all of the -O3 optimizations, and it also enables optimizations that are not valid for all standards-compliant programs. Below are the changes that I made to the makefile.

What I found interesting was that the runtimes seemed to fluctuate more than with the other 2 flags I had tried. On the first run I actually received a run time very similar to the other 2 methods, so going forward I assumed I would get the same results as the test with the -O3 flag, but I found that was not the case.

The second run was faster than the first, but not by much; this was approximately the same timing I got for one of the -O3 runs. I understand that -Ofast does activate the -O3 optimizations, but it should not be almost identical, as there are a few more optimizations activated in -Ofast.


Finally, for my 3rd run after the changes I got a runtime in between the first and the second. As I mentioned previously, the timing was fluctuating, but I was unsure why. Even though it was slightly faster, the timing did not get too much better.


Again my efforts at optimizing the program ended in failure. Going forward I would like to find out what causes the fluctuations in the timings, to see if there is anything I can do to reduce them and maybe even get the program running faster.

by dparul at April 22, 2018 06:38 PM

Arsalan Khalid

A braver release, 0.3, Getting through life and open source

It’s been tough; it’s a real grind right now. Not a test against effort, time, or desire, but a test against myself, mastering my will power. I set out on a journey: become a great coder. Without doubt, it’s something that is far more difficult to do than to say; it will always be easier to say it than to just do it. We can’t just do it, though; we have distractions, which can really be anything that takes you away from what you enjoy, which in my case is just laying back, chilling, writing some code, and making some nice bread out of it. Up until recently I’ve been doing a lot of the management-consulting style of work: client discussions, wine and dine, requirements gathering. It’s a whole different side of experience when it comes to seeing a business. You get a shape for what a product is, and what people, let alone clients, are really looking to buy. Not only when it comes to their own businesses, but selfishly it comes back to ourselves, and what benefit one gains through developing ourselves. That’s why open source development is so powerful: it’s like you’re serving yourself by doing more software development and building your brand, but now you’re also building a project for anyone that uses software globally, AKA all humans.

For this time around, I’m trying to build on the development I’m doing on the Brave browser. It’s been a lot of CSS, which has made it easy to make an impact, write code, and hit the ground running (I’ve been jogging for a while). It’s been starting to get on my nerves a little bit, as I have the desire to do more intensive, impactful development. I suppose impactful isn’t exactly what I’m trying to say, because all code written for a project has its value; you don’t know when another developer will come along and pick it up, or even what a user may gain out of it. But still, being stuck in this land:

Just imagine, I had to find something called enabledContent__grant_text, and mess around with its attributes, locating this specific style field was a trek on its own. All this styling is in JavaScript (Aphrodite library) too by the way…, there’s a lot of different libraries here at play, and even a file filled with all the different UI text fields, in many different languages. Absolutely profound, as all of this was contributed to by the community supporting Brave! Just look at the number of files I had to mess around with to change a few text strings in their UI.

This can of course be tedious for development, but at its core I’ve learned that this is what a mature and vast UI looks like. I’ve really only seen smaller grade projects, a bit of production level services, but mostly servlets and back-end systems.

React react react, appropriately, that’s how you live life, right?

There’s a lot going on in the above screenshot, so what exactly is happening? It looks like there’s this function or set of logic being executed in this block to ‘render’ something. Usually that’s associated with synthesizing some form of picture, colour, or art into virtual existence out of thin air. Many people can likely recognize all of the JavaScript mixed in with a set of HTML tags here, especially when it starts with <div>. But what could throw many people off is the ‘<BrowserButton grouped item secondaryColor’ bits, the first thing one would see. First off, <BrowserButton> is a custom tag created within JSX, “a syntax extension to JavaScript. It is recommended to use it with React to describe what the UI should look like. JSX may remind you of a template language, but it comes with the full power of JavaScript”
 — JSX In Depth. Good reminder that the engineers at Facebook, and now the community, have developed this stateful, event-driven, compositional web framework. Which has humble beginnings in pure PHP, when Zuckerberg was writing code in his dorm room. The LAMP stack! You know, when he created it almost 15 years ago. Using JSX, a Component has been generated, with a corresponding state associated with it. Within this state various visual properties have been set, such as groupedItem and primaryColor. Now saying this, acting all like I’ve become a scholar in React, I’m still learning myself. Which is why it’s always best being able to reach out in the community on Brave’s Discord (in late April on a Sunday afternoon) with something like:

That being said, when looking at this closely:

primaryColor is setting the state’s colour property to the browser UI’s primary colour, which is their de facto lion orange. It’s also saying that ‘l10Id’ is set to recover, which translates to: get the text from the browser’s text dictionary, where each of these ids is associated with a few sentences in a selected language of choice. Since recover is set, this component will display the text associated with recover in the default GB-EN. Which makes me think of my eternal debate with my colleague, who says English will always be associated with the British, who invented English. That’s why it’s called English, not American or Canadian. Further, there is some logic with the recognizable onClick attribute: {this.recoverWallet}, which calls a method inside the broader ImmutableComponent, LedgerRecoveryFooter. If you’re picking up on the wording here, you’ll notice that this component is related to recovering something, which is, incidentally, restoring the browser’s ledger. As a feature of the browser, this is pretty cool, since it provides the user with the ability to update any browser with a ledger of their choice, representing a user’s elapsed time on a Basic Attention Token publisher site. It’s all connected to the code we’re writing, to what these buttons in React are really doing. Finally, notice the last attribute being set in the Component, custom={styles.recover__button}, setting this button to have a button-style view in Brave, which can also be replaced by a picture, icon, or text.

Looking at something as simple as just 4 lines of code, but then looking up at the real depth of what this code actually means from a product sense, is profound. That is what these simple buttons are doing, in combination with the development approach of making something as simple as a button: creating your own custom HTML tag, a state tied to it, setting how it will look, the colour, and the text, without touching the stylesheets or backend. Ladies and gentlemen, I’d say this is what scalable means! Check out more of my code for this pull request, which includes my third release, here:

I’d say this sums up my short blab and learnings from web development within the Brave browser, and getting a closer look at what the ‘ledger’ component, the famous ‘blockchain’ piece of the browser, really is.

Thanks for tuning in!

by Arsalan Khalid at April 22, 2018 05:36 PM

Sanjit Pushpaseelan

SPO600- Failed assembly implementation

After spending the last two days trying to implement assembly language, I am sad to say I failed to produce any notable results with my implementation.

To start off, I re-benchmarked my results for MD5deep since I was getting weird results when I ran some tests a few days ago (several of my tests were getting me 12s+ runtimes for some reason which I found odd). I think the server was just busy or something since my runtimes have gone back to normal. Anyways, I will post my new benchmarks so you have a reference for the upcoming benchmark.


Luckily, my results seem to be back to normal. I am at an average runtime of 9.3 seconds which is basically what I got during my last testing.


Sadly, most of my attempts to implement assembly language ended in failure. Due to my inexperience with assembly language, I was unable to find the proper documentation for ARM assembly. Most of the assembly I found in rewolf I was unable to even write for ARM, since either I couldn’t find a way to convert it into ARM or I was just unable to figure out what the code was doing. I’m not going to bother posting the code I attempted to convert into ARM assembly, but I will list what I was trying to do with each conversion.


  1. Convert the bit shifting functions (F1, F2, etc…) into assembly. I was never able to figure out how to properly move the registers to emulate what the bit shift functions were doing before.


2. Write a function to process the message passed into MD5 by using state transformations


I was unable to implement the functions above, but I was able to implement one other function.


#define DISPLAY(x,w) __asm__("ror %%cl,%0" : "=r" (x) : "0" (x), "c" (32 - (w)))

This code was interesting to write because I had no idea that ARM lacked some of the instructions that x86 has, so I had to get creative with this solution. Instead of rotating left like the original code, I rotate right by the complementary amount, all the way around to the other side, so the bits end up in the same place they would be if I had rotated left by the passed value. Below you will find my results from compiling with this function implemented.



Sadly, as you can see, my runtime is around 9.3 seconds, the same as my initial run. This means that my efforts were in vain.


Moving forward


Seeing as this project is due tomorrow, I will not have time to try and adopt a new plan of attack for this project. I will be posting a blog post that has summarized the work I have done in the past 2 months and what I have taken away from the work I’ve done.

by sanjitps at April 22, 2018 02:13 PM

Justin Vuu

SPO600 – Stage 3 – Finale

Because I won’t be available for most of Sunday to write this blog, I’ll write it now.

Summary of Progression

I had difficulty finding a project to work on. I tried a variety of different projects on GitHub, but many of them had problems being installed on our servers, such as requiring immintrin.h. With stage 1’s deadline approaching, I hastily picked a project that I would end up abandoning because the only way to benchmark it was infeasible. I later settled on HashRat after learning how to generate a large file of random data.

My approach to optimizing HashRat is akin to throwing things at a wall and seeing what sticks. I went through everything from changing the optimization level to trying inline assembler. My biggest breakthroughs, I think, are when I discovered the second Makefile that was responsible for compiling the algorithms, and how to enable the optimized version of the transform function.

Increasing the optimization level in the other Makefile had a small but noticeable improvement. It was enabling the optimized function that made the biggest improvement. This is because the function unrolled most of the loops. Unfortunately, my attempt at inline assembler turned out fruitless.

In the end, I created a pull request that only enabled the optimized function by default. I didn’t want to use the O3 flag because I hadn’t tested it on the other algorithms used by HashRat. I submitted an issue and pull request, but so far haven’t seen a reply.

Analysis of Results

Running the original build on Xerxes, BBetty, and AArchie produced different run times. This was expected, especially for BBetty, because of their different specifications and architecture. Xerxes and AArchie hashed a 3GB file in about 36.9 seconds, while BBetty was the slowest taking 45.5 seconds.

With just the O3 flag in the proper Makefile, I saw a very small improvement. On AArchie, there was about a half second improvement. Xerxes saw a greater improvement of about 1 second. I didn’t test this approach on BBetty.

With the unrolled function being used in addition to O3, all 3 servers saw a significant increase in performance. Interestingly, the improvements were much better on the AArch64 servers compared to Xerxes. Xerxes was only 11% faster in total; AArchie, on the other hand, shaved a quarter off its time.

So how is it unrolled? Let me link a gist of the functions so you can see the differences:

In the slower version, the loops are iterated 16 and 64 times respectively. This will likely produce many jump instructions. Values of the variables are also being changed at the end of each iteration.

In the unrolled version, the loops are only iterated 2 and 8 times respectively. Macros that take arguments like a function are used 8 times per iteration. Because the preprocessor replaces every reference to the macro with its body, it’s essentially like 8 inlined functions per iteration. This also means the program doesn’t need to swap the values of variables; it simply changes the order they’re passed in.

The unrolled loop runs 2 times, as opposed to the original’s 16 times. And that’s not even counting the otherworldly math going on inside each iteration.


I think this is possibly the hardest project I’ve ever been given in my academic career.

I think I could have improved my process if I had a better understanding of how to optimize. I’ve honestly felt very lost throughout the course, and this project was very daunting. I think I only have myself to blame because the professor gave every opportunity to clarify topics covered in class and help with projects.

It was great that I managed to find something that helped improve performance, and it was right there in the code.

However, I wish I was able to figure out assembler. The reason I dared to try inline assembler was that I wanted to push myself. I was inspired by someone else’s attempt at rewriting the same function in assembler. Unfortunately, I couldn’t figure it out and didn’t make any progress on that side.

I also should have attempted to contact the project’s author from the beginning. I forked the project and worked on it without ever trying to communicate with them. I feel that this might have made my issue and pull request come off as rude.

I also should have looked for a more active project. Having an actual community or active authors would have helped me greatly in understanding their project, where it needs improvement, and how to improve it.

One takeaway from this course is that I should always review my code thoroughly to check for any unnecessary calculations, unused variables, and minimize the use of loops with many iterations.



by justosd at April 22, 2018 09:18 AM

Matt Rajevski

SPO600 Project – Part 2[update]

So I’ve gone through most of the major functions that the benchmark program went over and did some individual test on them before and after I made some changes to the source code.

As I mentioned before, the functions had high efficiency ratings, and that really showed when looking at the source code. Everything was done in a very clean and efficient manner. It is compression software, so the logic behind it is complex by nature, but I didn’t expect it to be this complex.

Here are some examples of highly complex code found in the major functions:

// LZMA2 decode to Buffer //
SRes Lzma2Dec_DecodeToBuf(CLzma2Dec *p, Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen, ELzmaFinishMode finishMode, ELzmaStatus *status)
{
  SizeT outSize = *destLen, inSize = *srcLen;
  *srcLen = *destLen = 0;
  for (;;)
  {
    SizeT srcSizeCur = inSize, outSizeCur, dicPos;
    ELzmaFinishMode curFinishMode;
    SRes res;
    if (p->decoder.dicPos == p->decoder.dicBufSize)
      p->decoder.dicPos = 0;
    dicPos = p->decoder.dicPos;
    if (outSize > p->decoder.dicBufSize - dicPos)
    {
      outSizeCur = p->decoder.dicBufSize;
      curFinishMode = LZMA_FINISH_ANY;
    }
    else
    {
      outSizeCur = dicPos + outSize;
      curFinishMode = finishMode;
    }

    res = Lzma2Dec_DecodeToDic(p, outSizeCur, src, &srcSizeCur, curFinishMode, status);
    src += srcSizeCur;
    inSize -= srcSizeCur;
    *srcLen += srcSizeCur;
    outSizeCur = p->decoder.dicPos - dicPos;
    memcpy(dest, p->decoder.dic + dicPos, outSizeCur);
    dest += outSizeCur;
    outSize -= outSizeCur;
    *destLen += outSizeCur;
    if (res != 0)
      return res;
    if (outSizeCur == 0 || outSize == 0)
      return SZ_OK;
  }
}
// Part of a BZip2 encode function that is 285 line long //
      UInt32 remFreq = numSymbols;
      unsigned gs = 0;
      unsigned t = numTables;
        UInt32 tFreq = remFreq / t;
        unsigned ge = gs;
        UInt32 aFreq = 0;
        while (aFreq < tFreq)            aFreq += symbolCounts[ge++];         if (ge > gs + 1 && t != numTables && t != 1 && (((numTables - t) & 1) == 1))
          aFreq -= symbolCounts[--ge];
        Byte *lens = Lens[t - 1];
        unsigned i = 0;
          lens[i] = (Byte)((i >= gs && i < ge) ? 0 : 1);
        while (++i < alphaSize);
        gs = ge;
        remFreq -= aFreq;
      while (--t != 0);
// Deflate bit reversal function //
NO_INLINE void Huffman_ReverseBits(UInt32 *codes, const Byte *lens, UInt32 num)
{
  for (UInt32 i = 0; i < num; i++)
  {
    UInt32 x = codes[i];
    x = ((x & 0x5555) << 1) | ((x & 0xAAAA) >> 1);
    x = ((x & 0x3333) << 2) | ((x & 0xCCCC) >> 2);
    x = ((x & 0x0F0F) << 4) | ((x & 0xF0F0) >> 4);
    codes[i] = (((x & 0x00FF) << 8) | ((x & 0xFF00) >> 8)) >> (16 - lens[i]);
  }
}

Despite the impressive optimizations of the code, I was able to find a few very small optimizations.

In 7zCrc.c I found a strength-reduction opportunity in CrcGenerateTable():

// Before //
  for (; i < 256 * CRC_NUM_TABLES; i++)
  {
     UInt32 r = g_CrcTable[i - 256];
     g_CrcTable[i] = g_CrcTable[r & 0xFF] ^ (r >> 8);
  }
// After //
  UInt32 j = 256 * CRC_NUM_TABLES;
  for (; i < j; i++)
  {
     UInt32 r = g_CrcTable[i - 256];
     g_CrcTable[i] = g_CrcTable[r & 0xFF] ^ (r >> 8);
  }

This removes the multiplication from the loop condition, which could save at least 256 calculations depending on the value of CRC_NUM_TABLES. Unfortunately the results were the same as before the change.

// CRC32 benchmark results post changes //
[mrrajevski@aarchie p7zip_16.02]$ 7za b "-mm=CRC32"

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs LE)

CPU Freq:  1995  1999  1999  1998  1999  1998  1998  1999  1999

RAM size:   16000 MB,  # CPU hardware threads:   8

Size      1      2      3      4      6      8

10:     640   1280   1920   2558   3775   4274
11:     644   1289   1933   2577   3786   4139
12:     646   1293   1939   2586   3854   4413
13:     647   1295   1938   2584   3779   4199
14:     647   1283   1934   2579   3739   4373
15:     642   1292   1913   2555   3817   4205
16:     642   1285   1924   2564   3830   4171
17:     642   1282   1927   2561   3790   4266
18:     643   1287   1929   2566   3834   4244
19:     644   1288   1930   2519   3778   4147
20:     642   1266   1897   2522   3763   3644
21:     638   1265   1894   2513   3712   3756
22:     636   1262   1887   2487   3715   4266
23:     633   1258   1882   2509   3729   4211
24:     628   1256   1885   2512   3719   4285

Avg:    641   1279   1916   2546   3775   4173

In Lzma2Dec.c I found an if statement that compares against a product of fixed constants.

// Before //
if (b >= (9 * 5 * 5))
// After //
if (b >= (225))

While this is only an if statement, the function containing it is called within a while loop, so it has the potential to save at least 2 calculations per pass. Unfortunately the results were the same as before the change, which makes sense in hindsight: 9 * 5 * 5 is a constant expression that the compiler folds to 225 at compile time anyway.

// LZMA2 benchmark results post change //
[mrrajevski@aarchie p7zip_16.02]$ 7za b "-mm=LZMA"

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs LE)

CPU Freq:  1994  1993  1999  1999  1998  1998  1999  1999  1999

RAM size:   16000 MB,  # CPU hardware threads:   8
RAM usage:   1765 MB,  # Benchmark threads:      8

                       Compressing  |                  Decompressing
Dict     Speed Usage    R/U Rating  |      Speed Usage    R/U Rating
         KiB/s     %   MIPS   MIPS  |      KiB/s     %   MIPS   MIPS

22:       9646   511   1838   9384  |     130144   563   1973  11101
23:       9897   558   1807  10085  |     127691   560   1974  11050
24:       9272   551   1811   9969  |     124665   557   1966  10942
25:       9002   548   1875  10279  |     114883   526   1945  10224
----------------------------------  | ------------------------------
Avr:             542   1833   9929  |              551   1965  10829
Tot:             547   1899  10379


The last thing I managed to find was in BZip2Encoder.cpp

// Before //
  for (unsigned i = 0; i < 4; i++)
     WriteByte2(((Byte)(v >> (24 - i * 8))));
// After //
  for (unsigned i = 0; i < 32; i += 8)
     WriteByte2(((Byte)(v >> (24 - i))));

This removes the multiplication from inside the loop. The function is called within a recursive function, so there is potential to save 4 calculations per recursive call. But yet again the improvements, if any, were negligible.

// BZip2 benchmark results post change //
[mrrajevski@aarchie p7zip_16.02]$ 7za b "-mm=BZip2"

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs LE)

CPU Freq:  1998  1998  1999  1999  1999  1999  1998  1999  1999

RAM size:   16000 MB,  # CPU hardware threads:   8
RAM usage:   1765 MB,  # Benchmark threads:      8

                       Compressing  |                  Decompressing
Dict     Speed Usage    R/U Rating  |      Speed Usage    R/U Rating
         KiB/s     %   MIPS   MIPS  |      KiB/s     %   MIPS   MIPS

22:      16312   661   1491   9855  |      60175   661   1034   6837
23:      15738   654   1453   9509  |      57310   646   1024   6613
24:      15143   640   1429   9149  |      55011   643   1001   6441
25:      15673   675   1402   9469  |      53924   649    987   6403
----------------------------------  | ------------------------------
Avr:             658   1444   9496  |              650   1012   6574
Tot:             654   1228   8035

So far I have been unable to find any noticeable improvements to the program, but I will continue to search for more and will update with what I find.

by mrrajevski at April 22, 2018 06:16 AM