Planet CDOT

August 15, 2018


Mat Babol

DPS911 - Release 5

Jest and Puppeteer testing


For my last release, I am attempting to use Jest and Puppeteer for testing my torrent sharing pages. Jest is a unit testing framework for ReactJS projects, made by Facebook. The PR can be found here.

I am still learning Jest and Puppeteer and how they work together. Currently, I am creating a tree of files before the torrent starts; then, once the torrent is complete, I test whether those files were created. This PR is still a work in progress: I am still working to fully complete it and to have Jest run the torrent itself.
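
In rough terms, the test looks something like this sketch (the URL, selectors, and file paths are placeholders, not the project's real ones):

const fs = require('fs');
const puppeteer = require('puppeteer');

describe('torrent sharing', () => {
  test('downloaded files exist once the torrent completes', async () => {
    // Hypothetical tree of files expected after the torrent finishes
    const expectedFiles = ['downloads/a.txt', 'downloads/b.txt'];

    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('http://localhost:8080/share');   // placeholder URL
    await page.click('#start-seed');                  // placeholder selector
    await page.waitForSelector('#torrent-complete');  // placeholder selector

    expectedFiles.forEach((f) => expect(fs.existsSync(f)).toBe(true));
    await browser.close();
  });
});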

by Mat Babol (noreply@blogger.com) at August 15, 2018 04:08 AM

August 08, 2018


Mat Babol

DPS911 - Release 3

This is my third release for my Open Source class. This release is a progression of my second release, where I used WebTorrent for sharing resource files. In the previous release, I created two different pages, one for importing files and one for exporting them. With this release, I joined the two pages together and added a few new features, such as the number of peers and download progress.


For testing purposes, we have a temporary main page for trying out things like the editor, the Linux terminal, or the file editor. I've added the share link there as well for easy access during testing.



The look of the page is elegant now; it looks professional compared to the previous iteration. It displays the download progress, remaining time, download speed, upload speed, number of peers, and the total time. Once the torrent is complete, a message pops up on the bottom saying it is complete. To start the torrent, the Start Seed button needs to be clicked first.


A message will display showing the magnet URI, which can then be used for downloading.

For testing purposes, I'm sharing resource files between two different browsers, Chrome and Firefox.


Once the magnet URI is copied into the text box and the torrent is started, the download information is displayed below.
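
WebTorrent exposes all of these numbers directly on the torrent object, so the display can be driven by a simple polling loop. A sketch, assuming torrent is the object WebTorrent hands to the seed/add callback, and with made-up element IDs:

// Refresh the stats shown on the page every half second
setInterval(() => {
  document.getElementById('progress').textContent =
    (torrent.progress * 100).toFixed(1) + '%';
  document.getElementById('down-speed').textContent =
    torrent.downloadSpeed + ' B/s';
  document.getElementById('up-speed').textContent =
    torrent.uploadSpeed + ' B/s';
  document.getElementById('peers').textContent = torrent.numPeers;
  document.getElementById('remaining').textContent =
    Math.round(torrent.timeRemaining / 1000) + ' s';
}, 500);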



by Mat Babol (noreply@blogger.com) at August 08, 2018 02:03 AM

August 06, 2018


Mat Babol

DPS911 - Release 2

This is my second release for the DPS911 - Open Source class. With this release, I created two pages for sharing resource files between users: one for importing the files, and the other for exporting them. I am using the WebTorrent streaming torrent client for sharing the files. The pull request can be found here, while the issue is located here.

I did have some trouble with this release. Initially, I had the user selecting local files to share instead of resource files. The pages themselves are very simple; most of the work was done in the back-end. The import page only has an input field for the magnetURI and a download button to download all the files. The export page automatically starts the torrent when opened, sharing all of the resource files and displaying the magnetURI.
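
The core of both pages boils down to WebTorrent's seed and add calls. A rough sketch (resourceFiles, magnetURI, and the element ID are made up for illustration):

const WebTorrent = require('webtorrent');
const client = new WebTorrent();

// Export page: seed the resource files and display the magnet URI
client.seed(resourceFiles, (torrent) => {
  document.getElementById('magnet-uri').textContent = torrent.magnetURI;
});

// Import page: download everything the magnet URI points to
client.add(magnetURI, (torrent) => {
  torrent.files.forEach((file) => {
    // getBlobURL is WebTorrent's browser API for retrieving file contents
    file.getBlobURL((err, url) => {
      if (err) return console.error(err);
      console.log(file.name + ' ready at ' + url);
    });
  });
});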

This release was simple; there are no extra features included, such as the number of seeders, progress, or number of files. All of this will be included in the next release, which will combine the pages into one and give the user detailed information about the torrent.

With this release, I learned a little about promises in JavaScript. A promise is an object that may produce a value in the future: either a resolved value or a reason why it was not resolved. I've used multiple promises in this release with the torrent.
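
As a minimal illustration (this is not the actual torrent code), a promise wraps an eventual result like so:

// A promise that resolves with a magnet URI once seeding starts,
// or rejects if something goes wrong (hypothetical example)
function startSeeding(files) {
  return new Promise((resolve, reject) => {
    if (!files || files.length === 0) {
      reject(new Error('No files to seed'));
      return;
    }
    // ... start the torrent, then:
    resolve('magnet:?xt=urn:btih:...');
  });
}

startSeeding(['resource.json'])
  .then((magnetURI) => console.log('Seeding at ' + magnetURI))
  .catch((err) => console.error(err.message));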

by Mat Babol (noreply@blogger.com) at August 06, 2018 06:50 PM

July 26, 2018


Arsalan Khalid

What’s the benefit to using Angular’s mock data module with json-server, instead of just using…

What’s the benefit to using Angular’s mock data module with json-server, instead of just using json-server on its own?

by Arsalan Khalid at July 26, 2018 04:14 PM

July 15, 2018


Henrique Coelho

Experimenting with JSON (structured and unstructured data) and Go

As I said in my previous post, I've been experimenting with a few different languages lately, and I was curious to know how Go handles JSON - more specifically, I want to know how it handles structured and unstructured data.

Structured data

Suppose we have a JSON string like this:

{
    "name": "John",
    "age": 34
}

To handle a data structure like this, we can first build a struct to describe its model:

// `json:"___"` instructs the Marshal/Unmarshal functions how to map
// the properties from the JSON string to the struct. They are not
// strictly necessary here, since the names only differ in case and
// the matching is case-insensitive
type Person struct {
    Name string `json:"name"`
    Age  int32  `json:"age"`
}

To convert from and to JSON, we can use the functions json.Unmarshal and json.Marshal, which are analogous to JSON.parse and JSON.stringify in JavaScript. Take a look at this example, where I take an input string, parse it into a struct, and then stringify the struct back:

package main

import (
    "encoding/json"
    "fmt"
)

type Person struct {
    Name string `json:"name"`
    Age  int32  `json:"age"`
}

func main() {
    inputString := []byte(`
    {
        "name": "John",
        "age": 34
    }`)

    var person Person

    // Analogous to "person = JSON.parse(inputString)"
    json.Unmarshal(inputString, &person)

    // Result struct: {Name:John Age:34}
    fmt.Printf("Result struct: %+v\n", person)

    // Analogous to "stringifedPerson = JSON.stringify(person)"
    stringifiedPerson, _ := json.Marshal(&person)

    // Stringified from struct: {"name":"John","age":34}
    fmt.Printf("Stringified from struct: %s\n", stringifiedPerson)
}

Not bad at all. But what about nested objects and arrays? Let's increase the complexity a little bit with this JSON:

{
    "name": "John",
    "age": 34,
    "address": {
        "street": "1 Front Street",
        "unitNo": 123
    },
    "cars": [
        {
            "model": "Honda Civic",
            "year": 2015
        },
        {
            "model": "Toyota Corolla",
            "year": 2013
        }
    ]
}

Now we have nested objects and also an array of objects. Let's modify the struct(s) to handle this data:

type Address struct {
    Street string `json:"street"`
    UnitNo int32  `json:"unitNo"`
}

type Car struct {
    Model string `json:"model"`
    Year  int32  `json:"year"`
}

type Person struct {
    Name    string  `json:"name"`
    Age     int32   `json:"age"`
    Address Address `json:"address"`
    Cars    []Car   `json:"cars"`
}

Now let's see what Go can do (I formatted the output so it is easier to read):

Result struct: {
  Name:John
  Age:34
  Address:{Street:1 Front Street UnitNo:123}
  Cars:[
    {Model:Honda Civic Year:2015}
    {Model:Toyota Corolla Year:2013}
  ]
}
Stringified from struct: {
  "name":"John",
  "age":34,
  "address":{
    "street":"1 Front Street",
    "unitNo":123
  },
  "cars":[
    {"model":"Honda Civic","year":2015},
    {"model":"Toyota Corolla","year":2013}
  ]
}

Very cool! It seems like nested properties are not a problem at all!

Structured data + optional properties

The next thing I wanted to know is: what if I have optional properties? Besides just using the regular structs that we made, there are two additional combinations I want to test:

  1. Using a pointer instead of a regular variable. I believe this will set the values to nil if they are not present.
  2. Using the option omitempty inside the json:"__" directive. I believe it will not have an impact on how the JSON gets parsed, but it will have an impact on the result after it is stringified.

To test this, I made 8 combinations of properties:

  1. Regular string property
  2. Regular string property with omitempty
  3. String pointer
  4. String pointer with omitempty
  5. Object property
  6. Object property with omitempty
  7. Object pointer
  8. Object pointer with omitempty

Here are my structs:

/*
Naming:
Ex = "explicit", no "omitempty"
Om = with "omitempty"
Val = not a pointer
Ptr = pointer
Obj = object
*/
type SubStruct struct {
    ExVal string  `json:"exVal"`
    OmVal string  `json:"omVal,omitempty"`
    ExPtr *string `json:"exPtr"`
    OmPtr *string `json:"omPtr,omitempty"`
}

type MainStruct struct {
    ExVal    string     `json:"exVal"`
    OmVal    string     `json:"omVal,omitempty"`
    ExPtr    *string    `json:"exPtr"`
    OmPtr    *string    `json:"omPtr,omitempty"`
    ExValObj SubStruct  `json:"exValObj"`
    OmValObj SubStruct  `json:"omValObj,omitempty"`
    ExPtrObj *SubStruct `json:"exPtrObj"`
    OmPtrObj *SubStruct `json:"omPtrObj,omitempty"`
}

And here are my two repetitions. One of them is completely empty, while the other one is completely filled:

rep1 := []byte(`
{
}`)

rep2 := []byte(`
{
    "exVal": "string 1",
    "omVal": "string 2",
    "exPtr": "string 3",
    "omPtr": "string 4",
       "exValObj": {
        "exVal": "string 1",
        "omVal": "string 2",
        "exPtr": "string 3",
        "omPtr": "string 4"
       },
       "omValObj": {
        "exVal": "string 1",
        "omVal": "string 2",
        "exPtr": "string 3",
        "omPtr": "string 4"
       },
       "exPtrObj": {
        "exVal": "string 1",
        "omVal": "string 2",
        "exPtr": "string 3",
        "omPtr": "string 4"
       },
       "omPtrObj": {
        "exVal": "string 1",
        "omVal": "string 2",
        "exPtr": "string 3",
        "omPtr": "string 4"
       }
}`)

And here are the results:

All properties empty (struct)
{
    ExVal:
    OmVal:
    ExPtr:<nil>
    OmPtr:<nil>
    ExValObj: {
        ExVal:
        OmVal:
        ExPtr:<nil>
        OmPtr:<nil>
    }
    OmValObj: {
        ExVal:
        OmVal:
        ExPtr:<nil>
        OmPtr:<nil>
    }
    ExPtrObj:<nil>
    OmPtrObj:<nil>
}

When unmarshalling (parsing), the omitempty directive apparently makes no difference. The difference we see relates to the type of the variable: if it is a regular string, it is empty; if it is a pointer, it is a nil pointer.

Now let's see with the filled properties.

All properties filled (struct)
{
    ExVal: string 1
    OmVal: string 2
    ExPtr: 0xc42000e250
    OmPtr: 0xc42000e260
    ExValObj: {
        ExVal: string 1
        OmVal: string 2
        ExPtr: 0xc42000e270
        OmPtr: 0xc42000e280
    }
    OmValObj: {
        ExVal: string 1
        OmVal: string 2
        ExPtr: 0xc42000e290
        OmPtr: 0xc42000e2a0
    }
    ExPtrObj: 0xc420078270
    OmPtrObj: 0xc4200782a0
}

Again, no difference for the omitempty directive. The only difference is the type of the variable: strings get their string value, while string pointers are populated with the address of the string. Now let's see what happens when we stringify these structs back.

All properties empty (stringified)
{
    "exVal": "",
    "exPtr": null,
    "exValObj": {
        "exVal": "",
        "exPtr":null
    },
    "omValObj": {
        "exVal":"",
        "exPtr":null
    },
    "exPtrObj":null
}

This is more interesting. In this case, we are missing three properties: omVal, omPtr, and omPtrObj. These properties were not included in the string because two things were true for them: they were either nil or an empty primitive, and they had the omitempty directive.

Sometimes we want to return null values from APIs, and using pointers seems to be a good way to do it: we can send a regular value by assigning it to the pointer, or send a null by assigning nil to it.

All properties filled (stringified)
{
    "exVal":"string 1",
    "omVal":"string 2",
    "exPtr":"string 3",
    "omPtr":"string 4",
    "exValObj": {
        "exVal":"string 1",
        "omVal":"string 2",
        "exPtr":"string 3",
        "omPtr":"string 4"
    },
    "omValObj":{
        "exVal":"string 1",
        "omVal":"string 2",
        "exPtr":"string 3",
        "omPtr":"string 4"
    },
    "exPtrObj":{
        "exVal":"string 1",
        "omVal":"string 2",
        "exPtr":"string 3",
        "omPtr":"string 4"
    },
    "omPtrObj":{
        "exVal":"string 1",
        "omVal":"string 2",
        "exPtr":"string 3",
        "omPtr":"string 4"
    }
}

No big surprises here. The only thing I found notable is that the Marshal function was smart enough to dereference the pointers, so their real values (and not addresses) were printed!

Unstructured data

Go handles JSON structured data really well, but how about unstructured? Let's say I have this payload here:

{
    "portugal": "lisbon",
    "spain": "madrid",
    "france": "paris",
    "germany": "berlin",
    "netherlands": "amsterdam"
}

This file contains the capitals of a few European countries. Of course, it could contain completely different countries. How would we Marshal/Unmarshal this?

The problem here is that we cannot use structs in this case, since we don't have a predefined structure. What we can do is treat the object as a key-value map. The keys, in this case, are strings, and so are the values. So instead of recording the parsed data into a struct, we can record it into a map[string]string!

Here is the code:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    unstructuredData := []byte(`
    {
        "portugal": "lisbon",
        "spain": "madrid",
        "france": "paris",
        "germany": "berlin",
        "netherlands": "amsterdam"
    }`)

    var parsedData map[string]string

    json.Unmarshal(unstructuredData, &parsedData)

    // Resulting map: map[portugal:lisbon spain:madrid france:paris germany:berlin netherlands:amsterdam]
    fmt.Printf("Resulting map: %+v\n", parsedData)

    // Here is how we can access a property!
    // "The capital of Portugal is lisbon"
    fmt.Printf("The capital of Portugal is %s\n", parsedData["portugal"])

    stringifiedData, _ := json.Marshal(&parsedData)

    // {"france":"paris","germany":"berlin", "netherlands":"amsterdam",
    //  "portugal":"lisbon","spain":"madrid"}
    fmt.Printf("%s\n", stringifiedData)
}

But let's say that our JSON is a little more complex:

{
    "countries": {
        "portugal": "lisbon",
        "spain": "madrid"
    },
    "cities": [
        "oslo",
        "london",
        "milan"
    ]
}

How can we parse this? In this case, we are still dealing with a map with string keys, but the values are now of any type. We can use interface{} (the empty interface) to represent this type:

var parsedData map[string]interface{}

json.Unmarshal(unstructuredData, &parsedData)

// Resulting map: map[countries:map[portugal:lisbon spain:madrid] cities:[oslo london milan]]
fmt.Printf("Resulting map: %+v\n", parsedData)

Now let's go down one level and get a capital of a country again. Since we said that the value types are interface{}, we can't use the type map[string]string anymore. Instead, we will need to use map[string]interface{}:

countriesData := parsedData["countries"].(map[string]interface{})

// The capital of Spain is madrid
fmt.Printf("The capital of Spain is %s\n", countriesData["spain"])

Another example, now casting the list of cities into an array of interfaces and then printing them:

// oslo
// london
// milan
for _, city := range parsedData["cities"].([]interface{}) {
    fmt.Println(city)
}

Now let's finish by stringifying it back and printing it:

stringifiedData, _ := json.Marshal(&parsedData)

// {"cities":["oslo","london","milan"],"countries":{"portugal":"lisbon","spain":"madrid"}}
fmt.Printf("%s\n", stringifiedData)

Edge cases

  • What happens if my JSON string has a property not mapped in the struct?

It gets ignored.

  • What happens if I try to access something that doesn't exist in the maps for unstructured data?

If you just try reading its value, you get a zero value back. If you try using it as a list or an object, the program panics. You just need to check whether the property exists before accessing it.
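
The comma-ok idiom covers both checks. A sketch using the maps from above:

// Check that the key exists before using it
if countries, ok := parsedData["countries"]; ok {
    // The two-value type assertion does not panic on a mismatch,
    // so this is safe even if "countries" is not an object
    if countriesMap, ok := countries.(map[string]interface{}); ok {
        fmt.Println(countriesMap["spain"]) // madrid
    }
}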


I must say that I am very impressed by how Go handles JSON. The type system is very simple, yet robust and powerful. We were able to parse structured and unstructured data with very little effort, and converting it back into strings was just as easy.

by Henrique at July 15, 2018 07:58 PM

July 14, 2018


Henrique Coelho

A better language to replace backend JavaScript?

I've been experimenting with different programming languages recently, especially because I am getting sick of Node.js. There are so many problems with JavaScript, and instead of being fixed, more and more useless features are being pushed into the language with every iteration of ECMAScript. This is very frustrating, and it makes me look at simplistic languages like Go with a lot of hope.

You are probably now thinking "what is wrong with Node.js?". Well, saying that JavaScript is not an excellent language and pointing out its flaws is pointless nowadays (really, it's 2018 and there are still people writing articles like "why JavaScript is terrible? the answer will shock you!"? Give me a break). Look, I don't have a problem with JS per se - I think it is a really cool language. I'm just not a big fan of how bloated the language is becoming. There are tons of useless features being added, while the old, terrible ones are not being removed. I'll give you a few examples (yes, I will do exactly what I criticized in this same paragraph):


New feature: Symbols

Symbols are absolutely beautiful, aren't they? They are supposed to be a solution for creating privacy in objects, for when you don't want every method to be exposed publicly. For example:

function makeObject() {
    return {
        myPrivateMethod: () => { ... },
        myPublicMethod: () => { ... },
    };
}

const o = makeObject();
o.myPrivateMethod(); // Oh no, people can access this!

We can achieve privacy by using a symbol:

function makeObject() {
    const myPrivateMethodKey = Symbol('myPrivateMethod');
    const r = {
        // Now you can only access this method if you have access to
        // the "myPrivateMethodKey" variable!
        [myPrivateMethodKey]: () => { ... },
        myPublicMethod: () => { ... },
    }

    r[myPrivateMethodKey](); // I can still access my private method. That's good :D

    return r;
}

const o = makeObject();
o.myPrivateMethod(); // Doesn't work!

Fantastic, isn't it? Well, no. One thing that people ignored is that you can get the same thing by inserting a _ at the beginning of the method name. It is widely known that a _ at the beginning means that the method is private and you should not access it. This has been used for a long time by languages that don't implement private methods, such as Perl.

"No!" - you may be thinking. "Appending a _ in the beginning is different than using symbols, because with symbols you can actually achieve true privacy, while a underscore won't prevent people from accessing it". Well, no. JavaScript also gives you a method called getOwnPropertySymbols which gives you a list of all the private methods using symbols, and you can iterate though this list and call all of them. This completely defeats the purpose.

This feature is also often misused. For example, Google Datastore (a distributed database) uses the symbol KEY to reference the primary key of an entry, which means that if you want to read the value of the primary key, you need to access the "private" property using the KEY symbol, which they export from their library. This is probably the most inconvenient way to access a primary key I've ever seen.

"Wait! But just because people don't use it correctly doesn't mean it is a bad feature!" - Sure. It just makes it as effective as prefixing properties with a _. Whew. I'm glad we introduced this new feature that only introduces complexity and does not solve any problems.


Old defects being kept: the "==" operator and "null"

I think it's a general consensus in the JS community that the == operator should be abolished. If you don't know what I am talking about: JS has two operators for equality, == and ===, with != and !== as their counterparts. These are the differences:

1    === 1         // true
1    === '1'       // false
1    === ''        // false
''   === 0         // false
0    === false     // false
null === undefined // false

1    == 1         // true
1    == '1'       // true
1    == ''        // false
''   == 0         // true
0    == false     // true
null == undefined // true

The === operator evaluates equality based on type and value, while the == operator compares only values, coercing the types as needed. An empty string ('') coerces to the number 0, which is why '' == 0 is true.

This may seem like a powerful feature, and it is indeed powerful, but it is a terrible kind of powerful: it makes the code very difficult to maintain and read. When I am reading code that uses a == instead of a ===, I don't know if that was intentional or not. Do we need the == because we expect some kind of type casting there? Did the programmer just forget to type another =? Why do we need it? Isn't the input already validated? Shouldn't we be validating the input before sending it to this function? Why must you do this to me?

Now, about null: the default "there is no value here" value in JavaScript is undefined. If you try to access a variable that does not exist, that's undefined. If you try to access a property that does not exist, that's undefined. Undefined here, and undefined there. But wait! There is more! As if one "there is no value" value were not enough, null exists to do the same thing! To make things even better, null is completely broken. Look at this:

> null === undefined
false
> typeof undefined
'undefined'
> typeof null
'object' // ???

I like how the type of null is object, yet if you try to access any property inside null you get an error thrown. If you have things returning null in your codebase, it's not enough to check whether something is undefined or an object; you have to do this:

if (typeof a === 'object' && a !== null) { ... }

Or this

if (a !== undefined && a !== null) { ... }

For the love of god, please remove null from the language, already!


I don't want to make this post about every single thing I dislike about JS (we have enough of those on the internet already; besides, my blog's database doesn't have enough storage space), although I would love to expose my controversial opinions about why classes are the worst feature brought to Node.js. The thing is: I value simplicity. If there are two things in the language that achieve the same thing, one of them must be removed. Yes, I know we would be breaking backwards compatibility and blah blah blah. Look: we have semantic versioning for a reason. We have webpack and babel for a reason too. Let bad features die! If you want to have a nice garden, you add the plants that are necessary and fit in the bigger picture, and remove what is ugly and rotten. We must treat our tools and programs the same way.

Of course, there are always arguments that try to justify the mistake: "Null and undefined are not the same! Undefined is a way to express that something was never defined, while null is a way to say that we explicitly set it to null". These features not only increase complexity and bugs without adding anything useful, they also make programmers waste their time on long, useless arguments. The issue is not "are null and undefined different conceptually?", but "are the differences relevant enough to justify a totally different data type?" And no, they are not.

Anyway, what would I like to see in a language that replaces JavaScript? I've been thinking about this for a while, and here is a little list of what I, personally, would love to see:

  1. Good performance. If possible the language should be as fast as C++ or Java. I don't expect a language to be as fast as C, but I think C++/Java-like performance is a reasonable thing to ask for.
  2. Static typing. I am not a big fan of dynamic typing like what we have in JavaScript, and this is one of the reasons why I use TypeScript instead. I like static typing because I can have type-checking at compile time, as well as self-documenting code. It also reduces the amount of testing required, since I don't need to check if someone is sending me a string argument instead of an object, which happens in JS.
  3. A great package manager. What I love about Node is NPM: not having to worry about dependency conflicts, and being able to ask for any package version I need.
  4. Garbage collection. As much as I love C, I really don't feel like debugging memory leaks - it is error prone, and a team's time can be better spent elsewhere.
  5. Maturity. A good community and wide adoption is obviously very important for any language nowadays.
  6. Multi-paradigm. I find it hard to believe that we still have pure OOP languages nowadays. I don't think languages should be all functional either, but having a balance between functional, imperative, and OOP is a great thing, and kudos to JavaScript for being a good example of this (except for the "class" part). A few features I would love to have are If Expressions, Pattern Matching, and Currying, as well as some useful methods for lists such as map, reduce, filter... You get the idea.
  7. Easy to set up. Spending 30 minutes just to set up a build environment is absolutely miserable. I appreciate the simplicity of just running npm install and having a package.json that describes my whole project in 5 seconds.
  8. Good tools for parsing, sanitizing, and validating JSON. JSON is probably the internet's favourite data-interchange format nowadays, and having built-in capabilities to parse/stringify JSON into/from objects or interfaces is fantastic, especially if it supports unstructured data. The availability of libraries to sanitize this data and then validate it is also essential.
  9. Simple error handling. Try/Catch reminds me a lot of go-tos. Sure, go-tos are very useful if you are doing some very, very low-level programming, but they are considered bad practice if you are not writing a kernel. I think Try/Catch should have the same fate as go-tos, and kudos to Go for taking the initiative.
  10. Asynchronous/coroutines. When working on single-core applications (like a microservice, for example), asynchronous execution is extremely important for decent performance when handling requests.
  11. No OOP bloat. Objects are nice, but OOP is dangerous. I could write an entire post about this, but I will leave you with this gem instead.
The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

C++ and Java type systems are extremely bloated and complicated. I do not miss the old days when I was fooling myself with "I love programming in Java!".

  12. Simplicity. No, I don't want five ways to do the same thing. I don't want the choice of using classes, prototypal inheritance, factory functions, etc. I want a standard way that will be used by everyone.
  13. Explicit null values. Being able to assign null to an integer is terrible, but if the language makes it explicit that a value can be null (for example, by appending a ? to the type of the variable), then it is not so bad. I would say this is a must-have feature nowadays; see the sketch below.
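
TypeScript, for example, gets close to this with union types when strictNullChecks is enabled. A minimal sketch:

// A value can only be null if its type explicitly says so
function findUser(id: number): string | null {
    return id === 1 ? "John" : null;
}

const userName = findUser(2);
// userName.toUpperCase() here would be a compile-time error: it may be null
if (userName !== null) {
    console.log(userName.toUpperCase()); // safe: narrowed to string
}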

From what I've seen, Go and Rust are interesting candidates, although Go seems to lack in the functional area, while Rust lacks maturity. Well, maybe one day!

by Henrique at July 14, 2018 07:17 PM

July 10, 2018


Arsalan Khalid

Your article assumes a lot of things, you’re discussing things as if you’ve seen such a system…

Your article assumes a lot of things; you're discussing things as if you've seen such a system rolled out, as if you can assume greater resiliency, and that somehow it's 'cheaper'. That was the most shocking of all: how have you measured that? Right now it seems the sheer cloud costs of running a distributed and "decentralized" system amongst all of the node owners would be substantial, especially amongst financial institutions, as with the ASX example you gave. What ASX has built so far is proofs of concept wrapped on top of each other.

by Arsalan Khalid at July 10, 2018 05:10 PM

July 04, 2018


Andrew Smith

Getting kicked out of a store kind of sucks

First time for everything I guess. I’m still a little shocked that it happened. I just got kicked out of a store because… I had children with me.

That’s right. I, a full-size male, with disposable money to spend on things that might be nicer than I really need, got kicked out of an office furniture store: the Workspace Group Inc on 248 Bridgeland Ave in Toronto.

I’ve been looking for a replacement for my old Staples chair that’s been losing its skin for a couple of years (maybe longer). I decided that instead of spending two-three hundred dollars on another Staples piece of junk I’ll consider getting a properly designed, acclaimed ergonomic office chair such as a Herman Miller.

There aren’t many stores that sell that sort of thing. It’s quite expensive, can go over 1500$ plus tax. But I figured if I divide it by the decades it’s supposed to last – it won’t cost that much more and will give me a lot of comfort in the meantime. One store that was only about 15 minutes out of my way today was Workspace Group Inc. I checked their hours and went there to see.

They obviously sell mostly to businesses (that’s clear from their website) but they also explicitly sell to home offices and individuals.

As we arrived I parked the car right in front of their front door, which is on a glass wall so the people inside could see everything that was happening next.

I got three kids out of the car: an infant in a car seat, and two small children.

As I entered I got a weird look (suggesting "what are you doing here"), but I started off making sure that I was in the right place, and asked whether they sell only to businesses. The guy nearest me said mostly yes.

I didn’t get a chance to ask the second question, he immediately told me that it’s five o’clock and they’re closing. He forgot that there was a sign right on the door saying they closed in 30 minutes. As I was trying to figure out what he’s talking about he pointed at one of the chairs on the floor and said “that’s a three thousand dollar chair, it shouldn’t be wheeled all over the room”.

The back of my mind started getting the hint: he didn't want my four-year-old wheeling a super expensive chair around the showroom... because it couldn't handle the load? But I was still confused. My kids were very well behaved at this point (they were exhausted from earlier exercise), they weren't dirty, they didn't have any food or drink, and it wasn't wet outside.

At this point he thought of asking what I was looking for. I said I was looking for a good office chair. His eyes lit up for a moment, but he quickly pointed to someone's desk and said that person had the day off, then continued with a story about how sick he felt himself, and then basically I turned around and left.

On my way out he apologized for the inconvenience, and I said good bye, still not having understood (yes, I’m that slow).

So there you go. Clearly I got kicked out because the guy (who may have even been the owner) didn’t want kids in the showroom. It’s probably a personal difficulty for him, but I don’t think that really matters. I have a lot of personal difficulties with some people and I work with them anyway, making sure that we get the job done as well as possible. And none of those people end up giving me thousands of dollars.

It’s strange to me that someone so hip (this is no doubt a hipster office furniture store) yet well past his youth is so repulsed by children that he’ll kick a customer out of the store. I don’t feel it’s unfair, I feel store owners should be entitled to serve (and not serve) whomever they please and I’m definitely not going to make a fuss about it, but I just don’t get it.

by Andrew Smith at July 04, 2018 11:43 PM

June 21, 2018


Andrew Smith

Asunder in Chinese

Sometimes I forget how many people open source software reaches. I was reading through my web server’s log analyser results and noticed a weird URL as a source of some traffic. Here’s a screenshot of what I found there:

I don’t know whether it’s chinese or japanese or some other language, I just think this is so cool.

I wrote the software, a volunteer translated it into another language, and eventually someone wrote a review/tutorial in that language, which will drive even more users to the software.

I love open source. And one of the most amazing things is that it works despite so many reasons why it shouldn’t.

by Andrew Smith at June 21, 2018 01:05 PM

June 19, 2018


David Humphrey

Building Large Code on Travis CI

This week I was doing an experiment to see if I could automate a build step in a project I'm working on, which requires binary resources to be included in a web app.

I'm building a custom Linux kernel and bundling it with a root filesystem in order to embed it in the browser. To do this, I'm using a dockerized Buildroot build environment (I'll write about the details of this in a follow-up post). On my various computers, this takes anywhere from 15-25 minutes. Since my buildroot/kernel configs won't change very often, I wondered if I could move this to Travis and automate it out of our workflow.

Travis has no problem using docker, and as long as you can fit your build into the allotted 50 minute build timeout window, it should work. Let's do this!

First attempt

In the simplest case, doing a build like this would be as simple as:

sudo: required
services:
  - docker
...
before_script:
  - docker build -t buildroot .
  - docker run --rm -v $PWD/build:/build buildroot
...
deploy:
  # Deploy built binaries in /build along with other assets

This happily builds my docker buildroot image, and then starts the build within the container, logging everything as it goes. But once the log gets to 10,000 lines in length, Travis won't produce more output. You can still download the Raw Log as a file, so I wait a bit and then periodically download a snapshot of the log in order to check on the build's progress.

At a certain point the build is terminated: once the log file grows to 4M, Travis assumes that all that output is noise (for example, a command running in an infinite loop) and terminates the build with an error.

Second attempt

It's clear that I need to reduce the output of my build. This time I redirect build output to a log file, and then tell Travis to dump the tail end of the log file in the case of a failed build. The after_failure and after_success build stage hooks are perfect for this:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

after_failure:
  # dump the last 2000 lines of our build, and hope the error is in that!
  - tail --lines=2000 build.log

after_success:
  # Log that the build worked, because we all need some good news
  - echo "Buildroot build succeeded, binary in ./build"

I'm pretty proud of this, until it fails after 10 minutes of building with an error about Travis assuming that the lack of log messages (which are all going to my build.log file) means my build has stalled and should be terminated. It turns out you must produce console output every 10 minutes to keep Travis builds alive.

Third attempt

Not only is this a common problem, Travis has a built-in solution in the form of travis_wait. Essentially, you can prefix your build command with travis_wait and it will tolerate there being no output for 20 minutes. Need more than 20? You can optionally pass it the number of minutes to wait before timing out. Let's try 30 minutes:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - travis_wait 30 docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

This builds perfectly...for 10 minutes. Then it dies with a timeout due to there being no console output. Some more research reveals that travis_wait doesn't play nicely with processes that fork or exec.

Fourth attempt

Lots of people suggest variations on the same theme: run a command that spins and periodically prints something to stdout, and have it fork your build process:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
  - time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
  # Killing background sleep loop
  - kill %1

Here we log something at 5 minute intervals while the build progresses in the background. When it's done, we kill the while loop. This works perfectly...until it hits the 50 minute barrier and gets killed by Travis:

$ docker build -t buildroot . > build.log 2>&1
before_script
$ while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
$ time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
=====[ 495 seconds, buildroot still building... ]=====
=====[ 795 seconds, buildroot still building... ]=====
=====[ 1095 seconds, buildroot still building... ]=====
=====[ 1395 seconds, buildroot still building... ]=====
=====[ 1695 seconds, buildroot still building... ]=====
=====[ 1995 seconds, buildroot still building... ]=====
=====[ 2295 seconds, buildroot still building... ]=====
=====[ 2595 seconds, buildroot still building... ]=====
=====[ 2895 seconds, buildroot still building... ]=====
The job exceeded the maximum time limit for jobs, and has been terminated.

The build took over 48 minutes on the Travis builder, and combined with the time I'd already spent cloning, installing, etc. there isn't enough time to do what I'd hoped.

Part of me wonders whether I could hack something together that uses successive builds and Travis caches, and moves the build artifacts out of docker, such that I can do incremental builds and leverage ccache and the like. I'm sure someone has done it, and it's in a .travis.yml file on GitHub somewhere already. I leave this as an experiment for the reader.

I've got nothing but love for Travis and the incredible free service they offer open source projects. Every time I concoct some new use case, I find that they've added it or supported it all along. The Travis docs are incredible, and well worth your time if you want to push the service in interesting directions.

In this case I've hit a wall and will go another way. But I learned a bunch and in case it will help someone else, I leave it here for your CI needs.

by David Humphrey at June 19, 2018 02:45 PM

June 18, 2018


Arsalan Khalid

This worked for me, thanks for putting a post out there for it!

This worked for me, thanks for putting a post out there for it!
What does this addition even do exactly?

by Arsalan Khalid at June 18, 2018 11:29 AM

May 20, 2018


Michael Kavidas

SPO600 FINAL PROJECT

For my final project in SPO600 I was tasked with doing optimization in an open source project. My project choice was FFMPEG. This blog post will outline my journey and what I’ve taken away from it so far.

Step 1 – Finding what to optimize:

FFMPEG is a massive project, so if I was going to optimize it I would have to narrow down a function that could use optimization. Rather than go it alone, I reached out to the community for some advice. At first, what I was looking for were functions already optimized for X86_64 that I could port over to AArch64. A helpful member pointed me to a file that deals with decoding Opus and some AAC samples. This file has a version already optimized in X86_64 assembly.

Step 2 – Making sense of the code/ narrowing down a function to optimize

While sifting through the assembly code I had a hard time understanding what was going on. My professor suggested I focus on the C code and work from there. The file has roughly 400 lines of code and 9 functions, so my next step was to find out where I should focus my optimization. FFMPEG has a built-in timer function to facilitate benchmarking: the timer estimates the cycles that a given block of code takes to run (more useful than just time, with less variance between runs). I used this function to benchmark the functions that I felt I could optimize. Eventually I narrowed my focus to this function: origFunction

As you can see the function in question does some simple arithmetic on floating point values. When benchmarked I can see that the function gets hit fairly heavily during decoding:

benchmark.png

Step 3 – Writing my optimization

My idea for optimizing this function was to use SIMD instructions to do the arithmetic in parallel. I chose to write my optimization with NEON intrinsics because they provide easier readability and take fewer lines of code to get the same job done. Here is the resulting code:

optfunc
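
As a generic illustration of the pattern (this is not the actual FFMPEG function, just the shape of the idea), NEON intrinsics let you process four floats per instruction:

#include <arm_neon.h>

// Multiply two float arrays element-wise, four lanes at a time
// (assumes n is a multiple of 4)
void vec_mul(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);  // load 4 floats
        float32x4_t vb = vld1q_f32(b + i);
        float32x4_t vc = vmulq_f32(va, vb); // 4 multiplies in parallel
        vst1q_f32(dst + i, vc);             // store 4 results
    }
}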

Step 4 – Further benchmarking and test

When benchmarking my optimization I was disappointed to see that it was a lot slower than the original.

graph

When looking into the disassembly of this block of code I can see that the loading and storing of values is taking up much more time than the arithmetic:

dissassemble

The original disassembly:

origdissassemble.png

Conclusion:

I believe that the function I chose cannot be optimized this way because the operations needed to load the values into the vector registers take up more cycles than the operations saved by doing the arithmetic in parallel. Overall I learned a lot about how CPUs work and the many ways a program can be optimized. As a programmer, this course and assignment have really taught me to think about the code I am writing and how the compiler will translate it into machine code.

by mkavidas at May 20, 2018 09:39 PM

May 19, 2018


Mat Babol

DPS911 - Release 1

About two weeks ago, I started the DPS911 Open Source Projects class, which is essentially a continuation of the DPS909 Topics in Open Source Development class that I took last year. The class size is much smaller (we have 4 students compared to the 30+ we had last time), which is nice: the professor can spend more time with each of us.

For this class, we are starting a project that my professor has envisioned; for now we are calling it unbundled. This project is meant to recreate an operating system for web development in a browser environment. The idea is to have features such as accessing a directory of files, a code editor, a command line terminal, file sharing, and more, available in the browser on any operating system. The project isn't re-inventing the wheel; the technology is already there, we are just putting everything together. Brackets, for example, will be used for the code editor, while WebTorrent will be used for file sharing.

Docusaurus


For my first release, I took on issue #12, which was to create the Docusaurus site for the project. Docusaurus is a tool developed by Facebook to make it easy for teams to publish documentation websites without having to worry about the infrastructure and design details. The site contents are written in simple Markdown, and Docusaurus generates a high quality website.



I created the first version of the files and put a pull request in. The PR did not initially get accepted: there were a few bugs, the color theme that I chose wasn't the best, and some of my instructions weren't clear. I fixed all the problems and created a new PR, which was then accepted.

I've learned a lot about git that I previously didn't know. I'm still relatively new to Git, so things like rebasing or pulling from upstream were all new concepts to me. I already feel like my Git skills are expanding.

The Docusaurus site looks and feels much better after the improvements that I made. I changed the theme entirely, named the files correctly, and made a few other minor fixes.




The site works correctly locally; however, when it is up on gh-pages, there are a few resources missing. A few images and the main.css file cannot be reached. After a quick look, the files themselves are not missing, so it seems to be a linking error. I'll look into fixing this issue and then create a new PR.



New release


For my next release, I'll be working on sharing files using WebTorrent. WebTorrent is a streaming torrent client for Node.js and the browser. I've briefly looked into this and got parts of it working; this week I will dive deeper into it. Stay tuned for my progress.


by Mat Babol (noreply@blogger.com) at May 19, 2018 09:25 PM

May 15, 2018


Fateh Sandhu

ServiceWorkers and xterm.js integration

ServiceWorkers

ServiceWorkers are JavaScript scripts that run locally on the machine and communicate with the webpage using postMessage. They run in the background after they have been registered and the browser has installed them. They can route the network requests made by the page. Since they allow you to control network communications, they can be helpful in making the page even more customized to the requirements of a particular web app. For example, if you want to make an app that can run offline, or that runs smoothly regardless of network quality, service workers can use data stored locally in caches to speed up loading.

How they work

First we set up a basic HTML page that links to the JavaScript file that will register and install the service worker using a promise.

Screen Shot 2018-05-14 at 8.57.19 PM.png

Then to install the service worker, you register it with the file as the parameter.

Screen Shot 2018-05-14 at 8.56.53 PM.png

Once it has been installed, you can check to make sure that the appropriate page and request are being addressed. All of the files that will be cached should be listed, and the service worker should be installed successfully. We make sure that the request received is valid and then respond with the correct response.


Screen Shot 2018-05-14 at 8.57.08 PM.png
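
A minimal sketch of this flow (the cache name and file list are made up for illustration):

// main.js - register the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then((reg) => console.log('Registered, scope:', reg.scope))
    .catch((err) => console.error('Registration failed:', err));
}

// sw.js - cache files on install, serve them on fetch
const CACHE = 'app-cache-v1'; // hypothetical cache name

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) =>
      cache.addAll(['/', '/index.html', '/app.js'])) // hypothetical files
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});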


Xterm.js

Xterm.js is a library written in TypeScript that enables apps to run a terminal independently. We will be leveraging this library to provide a fully functional terminal in our web app. Xterm.js passes the events to the backend and displays the responses on the screen.

by firefoxmacblog at May 15, 2018 01:25 AM

May 11, 2018


Michael Kavidas

OSD600 My First PR, VSCODE Hacking

For the second part of my final project I was tasked with fixing a bug in an open source project. I decided to work on VSCode because of its large community, and because it seems very open to newbie developers.

Finding a Bug:

Finding a bug was pretty straightforward using the issue tracker on GitHub. At first I tried to fix a bug that was much too complicated for my first pull request. After looking again, I found a feature request that would be easy for me to do and would give me some experience with the workflow and a deeper understanding of how VSCode works.

The Feature:

The feature I was assigned was to add the option to have the active tab's border positioned at the top of the tab instead of the bottom (the default position). This would allow users and developers more flexibility in making custom themes.

Figuring out the way it works/ Debugging:

The first thing I did was ask for some guidance when I asked to be assigned to the bug. This gave me a helpful jumping-off point and narrowed down my search. I was pointed to a file where the color for the border is created. This file is used by themes to set the color scheme of VSCode. I then changed the default color so I could observe the change it has in the editor. After this, I still needed to find the code that sets the position of the border and figure out how it works. To do this I did two things: I first started the debugger and inspected the tab bar, then I searched the code to find where the "TAB_ACTIVE_BORDER" presets were being used. Using the debugger, I found out that the border was a boxShadow. Searching the code, I found the file that creates the aforementioned boxShadow and sets its position.

Adding the Feature:

To add the feature I:

  1. Added new color definitions for the top of the tab, called "TAB_ACTIVE_BORDER_TOP" and "TAB_UNFOCUSED_ACTIVE_BORDER_TOP"
  2. Added some logic that checks which position is defined and changes the border's position/color accordingly.

This allows theme developers to include either a top border or a bottom border for their tabs. I then submitted my PR and got a response asking me to make some changes. After making the necessary changes, my code was accepted.

Conclusion:

This project has been the first time I have contributed to a large code base. Usually in school you work on code written by yourself, giving you a full understanding of the code. Working on a large project like VSCode can be intimidating because of its unfamiliarity and how large the code base is. That being said, this project taught me that although contributing may seem intimidating at first, once you dive in and start playing around it can be surprisingly easy to get involved, and if you ever get stuck there is a community of people willing to help. In addition, the fact that the couple of lines of code that I wrote will run on millions of computers is a profound realization. In summation, this has been a very rewarding journey so far and I am excited to continue.

by mkavidas at May 11, 2018 09:38 PM


Ray Gervais

Closing Two Weeks Completed of the 100 Days of Code Challenge

After The First Week Was Completed

Forest with Road Down Middle

Wow, how quickly two weeks pass by while you're busy enjoying every hour you can with code, technology, people, and, for once, the weather. I'm even more surprised to see that I was able to maintain a small git commit streak (10 days, which was cut yesterday; more on that below), which is damn incredible considering that I spent 90% of my time outside of work away from a keyboard. I told myself that I would try my hardest to still learn and implement what I could while travelling, opting to go deep into the documentation (which I will reconstruct from my various Git commits and search history below) and learning what it means to write Pythonic code. Still, some progress and lines of code are better than none whatsoever. One helpful fact which made learning easier was my dedication to learning only Python 3.6, which removes a lot of Python 2-related spec and documentation. This allowed me to maintain an easier-to-target breadth of documents and information while travelling.

Jumping into Different Lanes

More so, I found myself trapped in an interesting predicament which I put myself in for the first week. Not knowing where to start, or how much time online challenges would take in the later hours, I opted to decide just as I walked toward the keyboard: "What am I building today?". This means that every day of the challenge, I've walked in on a blank canvas thinking "Do I want to play with an API? Learn how to read the file system?". This has been a zig-zag way of exposing myself to the various scopes and processes which Python is capable of. I love the challenge, but I also fear the direction would lead me towards a rocky foundation of niche exercises, pick-and-choose projects, and an understanding limited in scope. Learning how to make API requests with the Requests module was a great introduction to PIP, pipenv, and 3rd party modules. Likewise, dictating the scope of what I want to learn that day made each challenge a great mix of new, old, and reinforcement of a different scope compared to yesterday.

For the second week, I wanted to try some coding challenges found online, such as HackerRank's (thanks Margaryta for sharing), FreeCodeAcademy's Front-End, Back-End, and Data Science courses, and SoloLearn challenges on mobile. Curious about the output and differences between my previous and current week's goals, I came to the following thoughts after becoming a 3-star Python developer on HackerRank (an hour or so per day this week):

  • Preset Challenges are better thought out, designed to target specific scopes instead of a hodge-podge concept.
  • You can rate them based on difficulty, meaning that you’re able to gauge and understand your current standing with a language.
  • It’s fun to take someones challenge, and see how you’d accomplish it. There’s many times where I saw solutions posted on forums (after researching how to do N) which I thought I’d never had brainstormed, were too verbose, were well beyond my understanding, or too simple or stagnated where the logic could be summed up in a cleaner chained solution.

Experience So Far

Whereas I fretted and stressed over time and deadlines, this challenge's culture advocates for progress over completion. I still opt for completion, but knowing that code is code, instead of grades being grades, is a relieving change of pace which also makes the approach and implementation much more fun. I've opted for the weekends to be slightly more relaxed, not heavily focused on code and more on concepts and ideals (perhaps due to my constant traveling?), which also makes my weekday challenges fantastic stepping stones which play with the weekend's research.

Learning Python has never been an item high up on my priorities, and only through David Humphrey's persuasion did I add it to the top of my list (knowing that it would benefit quite a bit of my workflow in the future) and opt to learn it at the start of the challenge. From the perspective of someone whose background in the past two years revolved around CSS, JS, and Java, Python is a beautifully simple and fun language to learn.

Simple yet powerful, minimalistic yet full-featured: I love the paradoxes and contradictions which are produced simply by describing it. The syntax reminds me quite a bit of newer Swift syntax, which also makes the relation easier to memorize. I also gather that, from an outsider's perspective, the challenge shows growth in the developer (regardless of how they opt to do the challenge) through the body and quality of work they produce throughout the span of the marathon.

An interesting tidbit, is that I’ve noticed my typical note taking fashion is very Pythonic in formatting / styling, and you can ask my peers / friends who’ve seen my notes. It’s been like this since High school with only subtle changes throughout the years. Coincidence? Have I found the language which resonates with my inner processes? In all seriousness I just found it hilarious how often I’d start to write python syntax in Markdown files, or even Ruby files yet, when writing my own notes the distinction was minimal.

What About The Commit Streak?

Forest with Road Down Middle

Honestly, the perfectionist in me, one quick to challenge itself where possible, was the most anxious about losing the streak, especially since as a developer it seemed to me like one way to boast about and measure your value. I enjoyed maintaining the streak, but I also had to be honest with my current priorities and time to myself. Quite frankly, it's not healthy to lose an hour of sleep to produce a measure of code you can check in just for a green square when you've already spent a good few hours reading Bytes of Python on the subway, for example, or devoted time to learning more through YouTube tutorials on your lunch break. I thought that I'd use GitHub and commits as a way of keeping honest with myself and my peers, but after reading quite a few different experiences and post-200-days types of blogs, I'm starting to see why most advocate for Twitter as their logging platform. Green squares are beautiful, but they are only so tangible.

Whereas I can promise that I learned something while traveling, perhaps using SoloLearn to complete challenges, I cannot easily port that experience and its visual results over to Git to validate progress. I suppose that is where Twitter was accepted as the standard, since its community is vastly more accessible and also accepts that not everything is quantifiable through Python files. Instead, saying that you read this, did that, learned this, and experimented with that is as equally accepted as day-12-hacker-rank-challenges-04.py with its 100+ line count.

This doesn’t mean that I’m going to stop commiting to GitHub for the challenge, or that I’ll stop trying to maintain a commit streak either; it simply means that I can accept it being broken by a day where I cannot be at my computer within reasonable time. It won’t bother me to have a gap between the squares once in a while.

I’ve seen friends enjoying the challenge for similar and vastly differences too, and I highly recommend giving it a try for those who are still hesitant.

by RayGervais at May 11, 2018 09:34 PM

May 01, 2018


Henrique Coelho

Continuous Integration with TypeScript + Mocha + Istanbul (NYC) + CircleCI

Writing unit and integration tests is the bane of my existence. The sheer amount of boredom produced by this practice would easily make me rich if I were somehow paid to get bored. I would love to meet someone who genuinely enjoys writing tests as a hobby, so I could let them write all my tests for free, although my self-preservation instinct tells me that such a person cannot be trusted and will eventually try to stab me with a fish or some other unusual object that will make people chuckle when they read the news.

Anyway, writing unit tests is torture, but it has to be done. Other things that should be done, on top of writing unit tests, are:

  1. Check the coverage of these tests to make sure you did not miss any lines, branches, functions, files, etc.
  2. Continuously test the code pushed into a repository with a continuous integration system. This way, we can easily know if the tests are broken for a pull request

This post will be about joining TypeScript (programming language) with Mocha (test framework), Istanbul (code coverage), and CircleCI (continuous integration).

I created a simple TypeScript project with the following structure (the files are all empty for now, except for package.json, which contains the initial code from npm):

.
|-- .circleci
|   |-- config.yml
|-- dist
|-- package.json
|-- src
|   |-- print.ts
|   `-- transform.ts
|-- test
|   |-- mocha.opts
|   `-- unit
|       `-- transform.test.ts
`-- tsconfig.json

First, I made a tsconfig.json to configure how TypeScript will be compiled:

{
  "compilerOptions": {
    "module": "commonjs",
    "removeComments": false,
    "sourceMap": true,
    "baseUrl": "types",
    "typeRoots": ["node_modules/@types"],
    "target": "es6",
    "lib": ["es2016", "dom"],
    "rootDir": "src",
    "outDir": "dist",
    "types": [
      "mocha"
    ]
  },
  "include": [
    "src"
  ]
}

The "removeComments": false is very important. We will see why later!

I also made a little script to compile the TypeScript code in the package.json file:

"compile": "./node_modules/.bin/tsc"

Let's start with print.ts and transform.ts:

// print.ts
// This is just a dummy function. We won't do anything interesting with it
export function print(v: any) {
    console.log(v);
}
// transform.ts
// This extremely over-complicated function will receive an array of numbers
// and return 0 if the sum of the numbers is 0, 1 if the sum is > 0, and -1
// if the sum is < 0
// I made it complicated so we will have lots of branches to test
export function transform(input: number[]): number {
    if (!input || input.constructor !== Array)
        throw new Error('Input must be an array of numbers!');

    try {
        const total = input.reduce((acc: number, n: number) => acc + n, 0);

        if (total === 0) {
            console.log('The input is equal to zero');
            return 0;
        } else if (total > 0) {
            console.log('The input is greater than zero');
            return 1;
        } else {
            console.log('The input is less than zero');
            return -1;
        }
    } catch (e) {
        console.error(`Unknown error occurred: ${e}`);
        return 0;
    }
};

Alright. We have the code, now we need to make unit tests for it!

First, I will install the following packages:

  • chai - Has useful tools that will make asserting the results easier
  • mocha - Our test framework
  • @types/chai - TypeScript types for the chai module
  • @types/mocha - TypeScript types for the mocha module

And now I am going to write the test cases for transform.ts:

import { transform } from '../../src/transform';
import { expect } from 'chai';

describe('transform', () => {

    it('should fail if non-array is passed', () => {
        expect(() => transform('Bad input!' as any)).to.throw();
    });

    it('should return 0', () => {
        const result = transform([1, -1, 2, -2]);
        expect(result).to.eql(0);
    });

    it('should return 1', () => {
        const result = transform([1, -1, 2, -2, 3]);
        expect(result).to.eql(1);
    });

    it('should return -1', () => {
        const result = transform([1, -1, 2, -2, -3]);
        expect(result).to.eql(-1);
    });

});

Perfect! We have the test cases done.

Now, here is one problem: should we compile the tests? They are written in TypeScript, so they should be compiled, right? Well, you don't have to. Luckily, ts-node is here to help! Ts-node is a TypeScript interpreter! Although I would not recommend actually using it to run the main script, it is great for running the test cases!

First, installing the packages we need:

  • source-map-support
  • typescript
  • ts-node

Now let's configure mocha to use ts-node:

# test/mocha.opts
--require ./node_modules/ts-node/register
--require ./node_modules/source-map-support/register
--recursive
--exit

Here is what these lines mean:

  • --require ./node_modules/ts-node/register - Here we are telling Mocha to use ts-node as the interpreter
  • --require ./node_modules/source-map-support/register - Support for source maps, which will be useful later with Istanbul
  • --recursive - Test all the files in the directory, not individual files
  • --exit - Force exit after the tests are done (will kill any pending promises)

And let's make an NPM script to run the tests (files that end in .test.ts) in package.json:

...
  "scripts": {
    "test": "./node_modules/.bin/mocha test/**/*.test.ts",
    "compile": "./node_modules/.bin/tsc"
  },
...

That's it. Whenever we run npm test, mocha will run all the tests for us. Let's try it:

  transform
    ✓ should fail if non-array is passed
The input is equal to zero
    ✓ should return 0
The input is greater than zero
    ✓ should return 1
The input is less than zero
    ✓ should return -1


  4 passing (7ms)

But that's not all! Writing tests is not torture enough - we need to make sure we write enough tests to cover all our code. This is what code coverage does.

Istanbul (also known as NYC; nyc is the name of its command-line tool) will make this very easy. I will install the following package:

  • nyc

Easy. Now we can modify the test script so Istanbul will check our code coverage:

...
  "scripts": {
    "test": "./node_modules/.bin/nyc ./node_modules/.bin/mocha test/**/*.test.ts",
    "coverage": "./node_modules/.bin/nyc report",
    "compile": "./node_modules/.bin/tsc"
  },
...

Whenever we run the tests, we will get the coverage for our files. I also added a separate script (coverage) for when we just want to see the coverage, without running the tests again.

I will also add some settings for Istanbul in the package.json file:

...
  "nyc": {
    "extension": [ // <- Extensions to be covered
      ".ts"
    ],
    "include": [ // <- Which directories should be covered?
      "src"
    ],
    "reporter": [ // <- Reporters used *1
      "text",
      "html"
    ],
    "all": true, // <- Check all files? *2
    "check-coverage": true, // <- Enforce a coverage threshold?
    "statements": 90, // <- Minimum coverage for statements (%)
    "functions": 90, // <- Minimum coverage for functions (%)
    "branches": 90, // <- Minimum coverage for branches (%)
    "lines": 90 // <- Minimum coverage for lines (%)
  },
...
  1. Reporters are how the coverage is reported to us. In this case, I am asking for two types of reports: text in the terminal, and html files (useful for CircleCI)
  2. If all is set to false, it will only check the coverage of the files used by the test files. If you have a file that was not tested at all, it will not show up in the reports.

Let's take a look at the output of npm test:

ERROR: Coverage for lines (81.25%) does not meet global threshold (90%)
ERROR: Coverage for functions (66.67%) does not meet global threshold (90%)
ERROR: Coverage for statements (82.35%) does not meet global threshold (90%)
--------------|----------|----------|----------|----------|-------------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
--------------|----------|----------|----------|----------|-------------------|
All files     |    82.35 |      100 |    66.67 |    81.25 |                   |
 print.ts     |        0 |      100 |        0 |        0 |                 2 |
 transform.ts |     87.5 |      100 |      100 |    86.67 |             19,20 |
--------------|----------|----------|----------|----------|-------------------|

Cool! But there is a problem there: we still haven't fully tested transform.ts:

    } catch (e) {
        console.error(`Unknown error occurred: ${e}`);
        return 0;
    }

I put that catch there as an example of something I can't really test. Nothing will throw an error there, but sometimes we are using something that can fail under circumstances out of our control, and those are failures that we cannot reproduce.

What can we do then? We can tell Istanbul to ignore lines, like this:

    } catch (e) {
        /* istanbul ignore next */
        console.error(`Unknown error occurred: ${e}`);
        /* istanbul ignore next */
        return 0;
    }

This will only work if "removeComments": false is set in tsconfig.json, otherwise the compiler will remove the comment.

Let's try it now:

ERROR: Coverage for lines (85.71%) does not meet global threshold (90%)
ERROR: Coverage for functions (50%) does not meet global threshold (90%)
ERROR: Coverage for statements (85.71%) does not meet global threshold (90%)
--------------|----------|----------|----------|----------|-------------------|
File          |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
--------------|----------|----------|----------|----------|-------------------|
All files     |    85.71 |      100 |       50 |    85.71 |                   |
 print.ts     |        0 |      100 |        0 |        0 |                 2 |
 transform.ts |      100 |      100 |      100 |      100 |                   |
--------------|----------|----------|----------|----------|-------------------|

Sweet!

I won't bother making the test case for print.ts because that file was there only to show you what "all": true does: even if we are not testing that file, it will show up in the coverage report! Let's just jump into integration with CircleCI.

CircleCI is very easy to set up. Most of the time, continuous integration systems have their own separate environment (such as a container), which is bad news for people who can't get their code running even on their own machine. CircleCI is no exception. All we need to do is describe how the environment should be and how to run our tests (find more information here).

Here is my .circleci/config.yml that describes how to run my tests:

version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:10.0.0
    steps:
      - checkout

      - run:
          name: Installing packages
          command: npm install

      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules

      - run:
          name: Running tests
          command: npm test

      - store_artifacts:
          path: coverage
          prefix: coverage

In this case, I am asking for a container with Node 10.0. Then I follow these steps:

  1. Install my npm packages
  2. Cache my npm packages (will make the jobs a lot faster)
  3. Run the tests
  4. Save the html files with the coverage (remember the html reporter?) as an artifact, which we can access after the tests are done

As long as our project is set up on CircleCI, it will test anything we push into our repository.

All done!

Repository with the code

by Henrique at May 01, 2018 01:15 AM

April 30, 2018


Ray Gervais

An Introduction to The 100 Days of Code

The day has finally come: the start of the much discussed 100 Days of Code! The official website, which explains the methodologies and the whys of the challenge, can be found here: 100daysofcode.com. I decided that it would be the best way to start learning new languages and concepts that I’ve always wanted experience with, such as Python, Swift, Rust, and GoLang. The first and primary goal is to learn Python, and to have a comfort with the language similar to what I have with C and C++.

Expectations & Challenges

I’m not nervous at all about the idea of learning Python, but I am concerned with being able to do an hour of personal programming daily at a consistent rate. Being realistic, right now I still spend three hours commuting on buses and trains, crowded to the degree where it’s not viable to program even on a tablet or netbook. I imagine these coding hours will fall in the later part of the day, since I am no morning person.

I also expect to become rather well acquainted with Python 3 within a week or a few, and I have begun thinking of ways to further my development with the language by using or contributing to Python projects such as Django, Home-Assistant, Pelican, and Beets, for example. This will vary or expand as we get further into the process.

Once content, I want to move to Swift and relearn what I had previously done in the Seneca iOS course, attempting to further my understanding and build applications at the same time. I think the end result being an iOS application with a Python back end would be a beautiful ending, don’t you agree? We’ll see.

Here We Go

I cannot say that I will blog every day for the challenge, but I will instead try my hardest to keep those interested updated through my Twitter handle @GervaisRay. Furthermore, you can keep track of my progress here, where I’ll attempt to update the week’s README with relevant context and thoughts.

This will be fun, and I can’t wait to see how I and my peers do throughout the challenge.

by RayGervais at April 30, 2018 11:55 PM

April 29, 2018


Aleksey Glazkov

DPS909 – Lab 3

For this lab I decided to work on issue #42720, “Color picker: no longer appears in settings editor”. It is not a very serious bug; however, the behavior of this color picker is not user-friendly. After using tons of programs, I can definitely say that if I want to expand a color picker, I expect to hover over that small red square, but in VSCode it only works when you hover over the text.

color_picker_bug

A simple search in the VSCode source files led me to the file that handles the color picker’s behavior, ‘colorPickerWidget.ts’. There are two classes: ColorPickerHeader, which renders that small square with the selected color, and ColorPickerBody, which renders the color picker itself.

With the help of the debugger, I found a couple of listeners; however, they were set to listen for clicks on the label, while the color picker is shown when the label is hovered. My guess is that this line of code registers the listener I’m looking for:

this._register(model.onDidChangePresentation(this.onDidChangePresentation, this));



Right now I’m not able to fix this bug; however, after looking through the source code, I got a basic idea of what is going on. Every time I hover over that label, a signal is emitted, picked up by the listeners, and the color picker is displayed.
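To make the flow concrete, here is a minimal plain-DOM sketch of that behavior (illustrative only, with made-up selectors; it is not VS Code's actual code):

const label = document.querySelector<HTMLElement>('.color-label');  // hypothetical selector
const picker = document.querySelector<HTMLElement>('.picker-body'); // hypothetical selector

label?.addEventListener('mouseover', () => {
  // Hovering the label fires the event; the listener reveals the picker.
  picker?.classList.remove('hidden');
});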

by alexglazkov at April 29, 2018 03:47 AM

DPS909 – Lab 6

This blog post will cover how different browsers handle input provided to the address bar.

Brave, in comparison to other browsers such as Chrome and Firefox, couldn’t handle links containing spaces, e.g. “https://www.google.ca/search?q=dog cat”. It could trim links, but it always left the whitespace inside.

  • Brave couldn’t open files with whitespace in the path.
  • Chrome can open files with whitespace in the path and replaces the spaces with ‘%20’.
  • Firefox can also open files with whitespace in the path, but it does not replace the spaces with ‘%20’. Otherwise, the browsers handle URLs similarly.

Writing tests for Brave is not very difficult. I wrote some tests for the getUrlFromInput function. All I had to do was provide some ill-formatted input and check whether the output is equal to the expected result. Here is one example:

'calls url with leading and trailing whitespaces': (test) => {
  test.equal(urlUtil().getUrlFromInput(' https://www.google.ca/search?q=dog cat '), 'https://www.google.ca/search?q=dog%20cat')
}

What is similar in the browsers’ implementations?

All of the browsers use a set of functions to prepare input before final validation. Brave relies mostly on regular expressions, while Chromium analyzes input step by step in different functions. It looks like Mozilla uses a somewhat mixed approach.

In Brave, to cover the edge cases provided to us, all I had to do was replace all whitespace with ‘%20’, since Brave handles links and paths containing ‘%20’ well enough.

Doing this lab, I learned how Brave, Chrome and Firefox handle input from the address bar, how they parse it and decide what to do next with it.


by alexglazkov at April 29, 2018 02:32 AM

April 28, 2018


Aleksey Glazkov

DPS909 – Release 0.3

Hi there!

In this blogpost I will tell you about my experience working on Release 0.3 for my Open Source Development course.

This release was really challenging for me. I tried to fix different bugs in VSCode and Brave.

VSCode

First of all, I started working on issue #48103, “Saving workspace names with dot (.) removes the last dot”. However, I quickly faced a problem: none of the debuggers stopped at breakpoints placed in the file named workspacesMainServices.ts, where all the code handling workspace saving is located.

disabled_breakpoint

“Unverified breakpoint. Breakpoint ignored because generated code not found (source map problem)”. The error told me there was a problem with the source map, so I tried enabling it in the launch configuration file and tweaked some other settings. After researching this error on the web for some time, I tried a bunch of solutions, but nothing helped. I decided to move on.


The next issue I picked up was #48875, “Cmd+Click URL Containing Comma in Integrated Terminal doesn’t follow full URL”.

url_with_comma

That sounds interesting. I quickly found the files that handle links in the terminal and started digging in. I figured out that VSCode uses regular expressions to find links within text, and there was definitely a problem with the regular expression. I tested it on 3rd-party resources and it didn’t work as expected.

VSCode regular expression
Another regular expression that I found on the web

Time to research again. This time I found out that the issue I was working on was actually a duplicate of an already existing issue, and that the problem was not with VSCode but with one of its dependencies, in this case xterm.js, which some contributors were already working on. The good thing (well… a good thing for me) is that I found another bug while I was working on getting links with commas to work properly in the terminal.

bug_vscode

As you can see in this screenshot, the link’s tooltip is partially cut off when the link is located on the 1st or 2nd line of the terminal. I had never submitted an issue before and decided that it would be a very useful experience for me. I looked through the issues and couldn’t find anything similar. Actually, I found one closed issue with a merged pull request, but it seemed that it was not fixed. I filed my first issue ever, and along the way I discovered a reporting tool integrated into VSCode that helps generate issues. As soon as the issue was created, I started digging in again. After playing a bit with breakpoints, I found the piece of code that handles rendering of that tooltip. I played around with CSS properties and guessed a value that displayed the tooltip properly (later I confirmed it in one of the CSS files). Basically, I had to find the height of the tooltip box and make sure the CSS bottom property never exceeds the container height minus the tooltip box height.

Fixed tooltip
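In code terms, the clamp looks roughly like this (a sketch with assumed names and values, not VS Code's actual code):

const TOOLTIP_HEIGHT = 22; // assumed height of the tooltip box, in px

function positionTooltip(tooltip: HTMLElement, desiredBottom: number, containerHeight: number) {
  // Never let `bottom` exceed containerHeight - TOOLTIP_HEIGHT, so the box
  // cannot be pushed past the top edge of the terminal and cut off.
  const bottom = Math.min(desiredBottom, containerHeight - TOOLTIP_HEIGHT);
  tooltip.style.bottom = `${bottom}px`;
}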

Conclusion

Despite the fact that I did not submit much code for this release, I really got a lot out of it. My growth goal was to learn how different services work inside VSCode, and I decided to pursue it by fixing various small bugs. In the end, I fixed only one, but the experience I gained while working on the others is very valuable.


Brave

Just a quick overview of my experience with Brave.

My first experience working on Brave started with issue #8635, “All preferences options (left pane) should be accessible on a small window”. It was well discussed, and I saw some proposed solutions there, but there were no pull requests, so I decided to give it a shot.

Description:

When you view preferences in a small window, you can’t see some of the options.

Brave in a big window
Brave in a small window. Some options are inaccessible

Solution:

I added some CSS properties and got the following.

Brave in a small window, “fixed”

Not the best result apparently.

by alexglazkov at April 28, 2018 11:11 PM

April 25, 2018


Aliaksandr Ushakou

Dark Mode Feature Request

My task for today is implementing the dark mode feature for an open source project called bridge-troll. All information about this feature request can be found here. In summary, the user interface is very bright.

troll1

This UI looks nice and comfortable during the daytime; however, after sundown, this white palette does not fit the nighttime surroundings, and the eyes start to get tired.

The essence of this feature request is not just to change the color scheme, but to make the UI color theme automatically change depending on the time of the day.

An automatic color theme switch is not so difficult to implement, but there is one obstacle. Thanks to globalization and the Internet (hmm… the impact of the Internet on globalization is tremendous), any software can be accessed from all around the world.

An interesting fact:
In countries with strict censorship, where websites and apps are blocked indiscriminately, digital literacy is growing. People have to learn how to bypass prohibitions and set up a VPN. The recent blocking of the Telegram messenger in Russia is a very good example of this. The Russian government tried to block Telegram, but failed for a number of reasons. People began to spread information about how to bypass the blocking using a VPN. This incident increased not only people’s digital literacy, but also the popularity of Telegram, due to the word-of-mouth effect.

Ok, back to the topic. This web app can be accessed from all around the world, so the color mode should switch at the right time regardless of the user’s location. There are tons of open source JavaScript libraries for all kinds of occasions. The library that I’ve used is SunCalc, a JavaScript library for calculating sun position, sunlight phases, moon position, and more.
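As a taste of how that works, a day/night check with SunCalc looks roughly like this (a sketch; in the real app the coordinates would come from geolocation, and the variable names here are mine):

import * as SunCalc from 'suncalc';

const lat = 43.65, lng = -79.38; // example coordinates (Toronto)
const { sunrise, sunset } = SunCalc.getTimes(new Date(), lat, lng);

const now = new Date();
const isNight = now < sunrise || now > sunset; // pick the dark theme when true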

The next step is to find a map in dark colors. Fortunately, this project uses Leaflet. Leaflet is the leading open-source JavaScript library for mobile-friendly interactive maps. Leaflet has a huge selection of different maps for every taste.

Just look at this Mordor-like map. Awesome!

troll3

The map that I chose for the dark mode looks like this:

troll4
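Switching between the two looks is then mostly a matter of swapping tile layers; a sketch (the tile URLs and names here are assumptions, not bridge-troll's actual code):

import * as L from 'leaflet';

const map = L.map('map').setView([43.65, -79.38], 13);
const dayTiles = L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png');
const nightTiles = L.tileLayer('https://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png');

function applyTheme(isNight: boolean) {
  map.removeLayer(isNight ? dayTiles : nightTiles); // removing a layer that isn't on the map is a no-op
  (isNight ? nightTiles : dayTiles).addTo(map);
}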

Implementing the automatic color mode switch took some time; the hardest part was switching all the icons. The final result looks like this:

troll-gif

That’s all! Thanks for reading and have a nice one!

by aushakou at April 25, 2018 03:16 AM

April 24, 2018


Arsalan Khalid

What does it mean to support? An open source initiative 0.4

I originally set out to offer a lot more code contributions to the Brave browser as part of the open source course I’m finishing right now. However, I learned the hard way that this isn’t as easy as one would expect. It is indeed easy to get involved with a project, be a contributor, and support the initiative, but it is far more challenging to support the project in an integral way. This has been a humbling journey, where I’ve learned not to assume that my technical mind can do something, but instead to put in the work, and the consistent development time, needed to become better. Believe me, this is a hard thing to even admit to myself, because I know I have skills in some areas that are unmatched at my level, but I also know I have skills in my development background that could be a lot stronger. So you just have to keep moving forward and learn from the mistakes, the failures, and most importantly the criticism. That can be tough at times, because people don’t mean to take aim at you personally; it’s just that you aren’t delivering to the standard expected of you. It’s a tough pill to swallow, but like Neo, you have to just do it and come into your own.

For this PR, I wanted to identify some contributions that aren’t in the form of code, but rather documentation and general project management support. In doing so, I’ve helped clear up a few issues, or assisted in raising awareness of them. This is not only supportive of the development community behind the project, but also a great way to immerse yourself within the project’s community.

First, I started off fairly basic (as always) by weighing in on the validity of an issue.

Maybe Bug: Context searches in private tabs uses default search engine instead of DuckDuckGo · Issue #12639 · brave/browser-laptop

Although the core team of the project got back to me with:

That clarification certainly helps, both for developers who want to take the task on and for myself, should I choose to dabble further in this issue.

Duckduckgo as default search engine (instead of google) · Issue #9748 · brave/browser-laptop

This next issue alludes to a similar premise as the one before (pardon the thumbnail of the issue link). If you look closely at the thread within this issue, you’ll notice a cohesive debate and discussion around the use of DuckDuckGo within the Brave browser, and around making DDG the de facto default search engine. This raises an interesting set of questions: why is Google set as the default in the first place? One contributor mentions that this goes against the privacy focus Brave so heavily markets:

I personally don’t have a stance on this subject, but it’s interesting to see the range of opinions on the matter, and the dislike towards what many would deem the status quo of search. The truth is already out there, though: many engineers, and now the wider public, know about the ever-looming privacy issues with the likes of Google, Facebook, Airbnb, and all the big San Fran giants. Mozilla also falls slightly into this mix, but maintains an avowedly protective stance on users’ data and the sharing of it. Nonetheless, I personally find it a bit fishy that Mozilla is a company with nearly a billion dollars in revenue whose core service is the browser. Where’s all this money coming from? Apparently from key ‘search deals’ they have with various ISPs, search engines, and more.

The context of Mozilla and its revenue objectives does raise some curiosity about how this model fits within Brave, as the captain of the ship is the esteemed Brendan Eich, after all.

Moving on from that side note, I kept working through small tidbits I could support, such as:

Bookmark search · Issue #13172 · brave/browser-laptop

I offered some insight in the thread to draw the connection to a related, though slightly different, commit:
https://github.com/brave/browser-laptop/pull/4097

Let’s try investigating what they did and look more closely at their single commit:

Search bookmarks as soon as characters are typed by MKuenzi · Pull Request #4097 · brave/browser-laptop

This already implies the problem isn’t too intensive, and it gets us into debugging React components live! That’s cool; I haven’t done that before, so let’s give it a shot:

First try at running the debugger; things can never be so straightforward:
Debugger listening on port 5858.

Warning: This is an experimental feature and could change at any time.
Crash reporting enabled
crash-herald.js:24
Unhandled promise rejection in the main process OpenError: IO error: lock /Users/arsalan.khalid/Library/Application Support/brave/ledger-rulesV2.leveldb/LOCK: Resource temporarily unavailable
index.js:45

I basically dropped this after finding the debugger to be a bit of a pain. I know that isn’t the greatest, but I need to keep moving; I think I’ve offered a small measure of support here.

I then closed off this release by looking at a few relevant triage issues at:
https://github.com/brave/browser-laptop/wiki/(WIP)-Triage-of-issues

And this was the most interesting as it related to one of my other bugs and tests I’ve been looking at: https://github.com/brave/browser-laptop/issues/12120

Seems like the above issue is only testable in master on Windows at the moment. I’ll wrap up here, hopefully this is mildly amusing :)

Thanks for tuning in; these were a few of my musings on finding areas to contribute, however modestly, to the community and the ongoing tasks of the Brave browser project.

Cheers,

Arsalan

by Arsalan Khalid at April 24, 2018 06:09 PM

Testing a browser? Being brave enough to do it. Release 0.5. Half way!

https://github.com/brave/browser-laptop/issues/13894 damn. So what are we testing exactly? We can also find that sometimes our contributions don’t make it all the way, like with this:

https://github.com/brave/browser-laptop/pull/13765

Now I’m looking at testing the browser as a contributor, so I had one of their contributors create a test task for me:
https://github.com/brave/browser-laptop/issues/13904

I was working on a feature related to being able to switch profiles easily, as Lauren mentions that switching builds uses your default prod profile:
https://github.com/brave/browser-laptop/pull/13418

She says: “Do know, that if you download and run this build, it will use your normal ‘brave’ production profile, so if you don’t want to do that, I suggest you rename your prod profile to be something else while testing (brave-prod is a good choice that will keep it from getting overwritten)”.

Pretty cool how it’s related to my PR, although I should finish that bad boy soon…

Moving on to getting their latest build, notice how I have to actually download from their release channel, and test a number of things, probably functionally test these different edge cases against all the bugs that were fixed in the latest build:
https://github.com/brave/browser-laptop/releases/tag/v0.22.665dev

It’s cool because I’m literally downloading the browser raw, running the .dmg package associated with this build, then testing those features. As instructed, I had to run something like:

arsalan.khalid@AMAC02V20QCHTD8:~/Library/Application Support $ cp -R brave brave-prod
to make sure I keep my old profile. It’d be useful if one could switch profiles easily… although this one partly falls on me.

Getting started with testing

One of the first tests is to check the ‘signature’ of the build; I haven’t seen something like this before:
arsalan.khalid@AMAC02V20QCHTD8:~/Library/Application Support $ spctl --assess --verbose /Applications/Brave.app/

Basically they’re looking for the following output:
/Applications/Brave.app/: accepted
source=Developer ID

Moving on to some more tests, it looks like we need to check that all of Brave’s default about: pages load correctly. A neat little trick I just learned enables developers to share their full environment, which is pretty cool; about:brave returns something like:

It even uses that copy button I worked on in a previous pull request :)

I made sure to go back and forth with their dev team, basically just general interaction with them, as I was confused about going back to test the previous change-set thoroughly:

Test requirement:
Test what is covered by the last changeset (you can find this by clicking on the SHA in about:brave). This is pretty huge; do you really expect a developer to test all of the changes in a release, in a format similar to this?

Fun fact: in the time I’ve been writing this blog post, I’ve already received a reply along the lines of:

Lauren added a useful note for completing one of the tests. I didn’t know developers could do such tests using online cookie testers; you learn something new every day.

It also looks like the latest build was 12 hours ago, which is 0.66, so I re-downloaded from the releases and picked up from that instead; now my libchromiumcontent has 66.0.3359.117, probably the version they’re looking to test, as they just deployed.

Fun fact: I had to actually move some of my own bitcoin into the browser to test that deposits still work. Neat, because I hadn’t actually deposited coins into the browser up until now. Guess I was forced to as a developer…

Makes you ask questions about whether specific features even exist:
Change min visit and min time in the advanced settings and verify the publisher list gets updated based on the new setting. Where is this feature? Then I found it, right in the advanced settings of payments :)

It’s possible that the description for this isn’t clear to me as a new tester, so if I find that indeed this feature does exist, then maybe it’s an opportunity to fix the wording here to make it clearer to anyone else where to find it.

Finally, it looks like I’ve tested something which actually doesn’t exist:
Visit nytimes.com for a few seconds and make sure it shows up in the Payments table. It doesn’t show up in the payments table.

I think I’ve caught something weird, as I’m seeing nytimes show up now:

It’s certainly longer than 10 seconds; this is weird. But it’s possible that this is my fault, as the browser records the time I actually spend on the tab rather than the time it just sits there, left alone. I tested this by going to an article and reading it, which covered a sad incident in Toronto, actually. It’s even sadder because the perpetrator was a classmate of mine for the past 5 years; a truly disheartening and saddening incident.

In the upgrade section of the tests, I’m not too sure how to do these bits from the UI:

  • Upgrade from an older version
      • Verify the wallet overlay is shown when the wallet transition happens upon upgrade
      • Verify the transition overlay is shown post-upgrade even if payments were disabled before the upgrade
      • Verify the publishers list is not lost after upgrade when payments were disabled in the older version

I’m not sure if this is asking me to actually back up or recover my wallet; it’s possible that the wording of this test just isn’t easy for a noob.

One of the tests asks to visit any YouTube video in a normal/session tab and ensure the video publisher is listed in the ledger table, which brings something strange to my attention. If you look at the YouTube video I’m watching, peep the time:

Now notice the time added on ledger:

It’s strange that they don’t match up exactly. I would expect a slight difference, but this is a fairly obvious one. I think time recording on media content in Brave still has some way to go, in general. I notice the same for embedded YouTube videos too:

And if we look at our payments ledger we find elapsed time to be:

It’s going to take some time to get this right, as there are obviously nuances here: how do you prove the amount of physical watch time for these videos, especially embedded ones? What if the user is looking somewhere else completely? HCI in practice, people!

There’s even a bit to try out the sync feature of the Brave browser, which is good for me as a new user, but it doesn’t seem very intuitive to set up. Following these steps:

And now if I go to my iPhone, I don’t see any of the referred settings in this list:

Just goes to show that more development and clearer descriptions are still to come, for testers and users respectively.

Guess this means I have to skip all the sync tests… as they’ve confirmed it too :) Always good to get community feedback!

Appreciating About Pages

Another fun fact: who knew Brave had a more detailed ad blocker view at about:adblock? That’s neat; I like all of the things they do with the information exposed through the about: pages.

Hotkeys Sugar

Going through the tests has also brought some cool hotkeys to my attention that I hadn’t used before, like:

Reopen the latest closed tab: Command + Shift + t (macOS) || Ctrl + Shift + t (Win/Linux)

Jump to the next tab: Command + Option + -> (macOS) || Ctrl + PgDn (Win/Linux), I’ll probably be using this one the most.

Jump into the URL bar: Command + l (macOS) || Ctrl + l (Win/Linux)

Pinning Tabs on Brave

Running through these functional tests has unlocked a plethora of features and capabilities I didn’t know existed in Brave. Who knew you could ‘pin’ or ‘unpin’ a tab, which basically means the tab will always be available for you when opening the browser?

I think that, especially as developers, we get lost in having hundreds of tabs always open; a simple feature such as this is most useful (notice the Brave logo on the very left):

In closing, this was a cool test to run through. It gave me better insight into using the browser, as well as into how different pull requests can break many things. A mature UI such as Brave’s obviously needs a lot of work and testing, and this was a very cool general glimpse of that. I feel grateful for the opportunity to learn, but it’s important to stay reminded to deliver. I now have to be sure to complete this test in the format the dev team expects. Hope this was helpful to any readers learning how to contribute to open source projects.

Final results of test posted here: https://github.com/brave/browser-laptop/issues/13904

Cheers, Arsa.

by Arsalan Khalid at April 24, 2018 05:59 PM


Qiliang Chen

OSD - Release 03

In this release, I'm going to find projects that interest me and fix some bugs if possible. I hope to find projects I can keep contributing to, not only for this release but also in the future.

I really like Android projects, so I want to find some Android projects to work on. I Googled 'Android open source project' and found some articles introducing them.
Link: https://medium.mybridge.co/38-amazing-android-open-source-apps-java-1a62b7034c40

I tried some of the projects and reviewed their issues. In the end, I chose Leafpic to contribute to.

Introducing Leafpic

Leafpic is an Android open source project that provides basic features for viewing, managing, editing, and sharing photos. It's a light application that runs very smoothly on Android devices. It has a simple interface, and I like it very much. It has around 1300 commits, 9 releases, and 50 contributors. It's a good project for our practice.

Potential Bugs to Fix

1. Leafpic: Rate app option is not functional.
    Issue link: https://github.com/HoraApps/LeafPic/issues/539

2. Leafpic: Translate to Chinese.

Fix Bug:

1 - Leafpic: Rate app option is not functional.

Issue link: https://github.com/HoraApps/LeafPic/issues/539

This option is supposed to let the user go to the Play Store to rate the app. However, it does not work as promised. The following video shows the issue.


I tried this application and found that the problem does exist! So I tried to fix it.

First, I needed to find the code that handles this function. I went into the 'about' layout first, because this option lives in the 'about' section, and I did find it there.

The rate option has the id 'about_link_rate'. I figured that when we click the 'rate' option, the application takes some action on the element with this id, so I looked for every piece of code related to it. I grepped this id in the terminal and got this:

There are two pieces of code related to this id. One is what I've just mentioned in the about layout; the other is in the 'aboutActivity.java' file. So yes, this is the code that handles the rate function.

I did some research online to understand this code. In particular, I learned a lot from one site, where I found the top-rated answer very useful. What it does is open the Play Store app for rating; if the phone has no Play Store app, it opens the page in a web browser instead. This is more comprehensive, because not all phones have the Play Store. I used this code to help improve the application.

But the rate option still didn't work. I suspected the 'getPackageName()' function was not working, so I did some study of it. In Google's documentation, I found information saying this function should work. I had no idea why it didn't.

I then tried to use the application name directly instead of using the function to retrieve it. I went to the Google Play Store and searched for the application through a web browser; the reason I used a web browser was that I needed to get its address. I got this:

The part following 'id=' was what I wanted. I substituted it for the 'getPackageName()' call, and it worked!

Pull request link: https://github.com/HoraApps/LeafPic/pull/551

2 - Leafpic: Translate to Chinese.

I also did some translation for this application. The project's translations are handled through Crowdin. The website link is here:
https://crowdin.com/project/leafpic

Before my translation, it was 77% translated into Chinese.

After my translation, it was 93% translated.

Conclusion

Through this release, I tried different Android projects, came to understand one of them, and joined it as a contributor. I learned different skills for locating a bug, such as using the 'git grep' command to find related code, and I learned to analyze the links between different modules. It takes a lot of time to understand a project, but once you do, it's relatively easy and fun to contribute. I will continue to contribute to it and to explore more interesting projects. I hope to become an outstanding Android developer by contributing to open source projects.






by Chan Kignor (noreply@blogger.com) at April 24, 2018 05:44 PM


Jeffrey Espiritu

SPO600 Project Follow Up

Inline Assembly Update After benchmarking the inline assembly changes I made on the BBetty and AArchie servers, I was rather surprised that the modified code actually degraded the encoding performance by significant margins. So I reasoned this may have to do with the slow memory access on BBetty and CCharlie in comparison to AArchie because … Continue reading SPO600 Project Follow Up

by jespiritutech at April 24, 2018 04:49 AM


Evan Davies

Release 0.3 - We All Need Some Space

For Release 0.3, I originally planned on working on another Brave desktop browser bug, but after perusing the open bug list, I wasn't sure if I could find anything that really took my interest. Thus, I began to search.

Toiling through GitHub's repos, looking for a project in JavaScript (a current preference of mine), I came across an interesting concept: an in-browser debugger. This project, aptly named Debugger.html, allows users to debug a web page in real time, without any other external programs. This means you can add breakpoints, run commands, etc., in a more functional environment than the "inspect element" feature of commonplace browsers. Currently, the project is focused on Firefox, but a Chrome implementation is also in progress.

The bug I had chosen to work on was a visual issue: one of the panels on the page had no padding. This caused the information inside to clump together, and it did not follow the padding rules that the other panels followed. The actual issue page can be found here.

My initial thoughts going into this were as follows:
 1. This was likely fixed through CSS
 2. There are probably quite a few CSS files to wade through
 3. Inspect Element will be a good friend


After some searching, testing, and fruitless efforts I noticed this class:


It appeared that the list of elements was embedded in this "accordion" class. As such, I began to look around Accordion.css and found a class inside that matched up with the element mentioned in the issue. As expected, adding padding to this class worked! I uploaded the file to my forked repo and issued a pull request. I thought that this would be the end of the bug, and considered looking for another one to work on. As I was, however, I received a message from one of the developers stating that changing this class' CSS would create a multitude of issues in other areas I wasn't aware of. He did, however, suggest a .js file that might point me in the right direction.

It was back to the drawing board, but with a hint about what to look for, I was optimistic. As it turns out, the file the developer suggested was the wrong file, but it WAS implemented next to the actual file I needed! That file did not have any connection to a CSS file, which would explain why there was no padding. I added an import for the CSS file in charge of the "Secondary Panes" (the right sidebar elements). In addition, I added a rule in the CSS file that catches all divs inside the specific class and adds 4px of padding. A compilation later, and everything was working! I reverted my original changes on my repo and added my new ones. As of now, I am waiting on a response from the reviewers to see if my request can be merged.

Changing the project I worked on for 0.3 was refreshing. Although it was another browser-related project, it offered a nice change of pace, and it allowed me to hone my investigative skills as well as my understanding of how larger projects function. I will keep in contact with the reviewers and make any further changes needed to have my pull request accepted.

by Evan Davies (noreply@blogger.com) at April 24, 2018 04:05 AM


Zhihao Cai

TDD Practice in Brave

For this lab, we’d like to practice TDD (Test-Driven Development) on a Brave URL bug. Essentially, TDD is the process of test-first development: making our code pass the test we just created.

After you start your Brave build, head to the URL bar and input

https://www.google.ca/search?q=dog cat

then press enter (note the space in between); you will notice a different result compared with the behavior in Chrome, for example.

It turns out Brave doesn’t take care of the space in the query string. Instead of a search string containing “dog%20cat”, we actually get two separate strings, “dog” and “cat”.

Once we have our desired result, we can now add our test case for this specific behavior in test/unit/lib/urlutilTest.js:
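The screenshot of the test didn't survive this feed, but in the spirit of the post, the added case would look roughly like this (a sketch following the conventions of Brave's test/unit/lib/urlutilTest.js; the exact original may differ):

const assert = require('assert')
const urlUtil = require('../../../js/lib/urlutil') // path assumed from the post

describe('isNotURL', function () {
  it('returns false for a URL containing a space', function () {
    assert.equal(urlUtil.isNotURL('https://www.google.ca/search?q=dog cat'), false)
  })
})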

Note the urlUtil inside the assert statement; it gives us a hint about where the code might sit. So, heading to js/lib/urlutil.js, navigate to the isNotURL function, and let’s make the change right before the call to UrlUtil.getScheme(str):
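That screenshot is gone as well; the kind of edit described is sketched below (the actual patch may differ):

// inside isNotURL, just before the call to UrlUtil.getScheme(str):
str = str.replace(/ /g, '%20') // encode embedded spaces so the string still parses as a URL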

By issuing npm run test -- --grep="urlutil", our test should pass, and the bug should be fixed.

by choy at April 24, 2018 03:22 AM


Justin Vuu

OSD600 – Lab 6 – Fixing a Bug And Adding Tests

In this lab, we fix an issue in Brave and then build tests for our fix.

The Issue

Brave parses text entered into the URL bar to determine whether it’s a URL or a search term. However, there is a bug: if a space exists anywhere in the string other than at the beginning or end, Brave assumes it’s a search string. This means entering “https://www.google.ca/search?q=dog cat” will cause Brave to think we’re literally searching for “cat” and “https://www.google.ca/search?q=dog”.


Current build of Brave


For comparison, other browsers like Chrome sees that as a URL by replacing the space with “%20”.


Chrome


The Fix

Fixing this was really simple: add a line to urlutil.js that replaces all spaces with “%20”.
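The screenshot of the change didn't survive the feed; the added line was plausibly something like this sketch:

str = str.replace(/ /g, '%20') // replace embedded spaces with their encoded form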

And now, URLs with spaces in them will be parsed as URLs instead of search strings.


My build of Brave with str.replace


Testing

After running the tests, we find that there are some tests in place that check that text with spaces in the URL bar should not be considered a URL. By editing these tests to expect the opposite (because such text is now treated as a URL by the browser), the tests pass.

by justosd at April 24, 2018 03:04 AM


Bakytzhan Apetov

Release 0.3: perf.html tool

For my last release of the Open Source course, I decided to contribute to a project called perf.html, part of Mozilla’s devtools.

perf.html interface

This is how devtools team describes perf.html:

perf.html visualizes performance data recorded from web browsers. It is a tool designed to consume performance profiles from the Gecko Profiler but can visualize data from any profiler able to output in JSON. The interface is a web application built using React and Redux and runs entirely client-side.

Mozilla develops this tool to help make Firefox silky smooth and fast for millions of its users, and to help make sites and apps faster across the web.” (Source: devtools-html)

First, I wanted to tackle issue #948.


This issue happens because of the way the devtools team defined the render() function in CallNodeContextMenu.js.


Notice the <ContextMenu> tag gets rendered regardless of how many nodes there are to show. The comment says that “ContextMenu expects at least 1 child.” I tried changing this function in several different ways, for example checking this.state.isShown before rendering, but I couldn’t get the desired result without breaking the code, because the menu requires a minimum of 1 node to render, or it won’t show at all.
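For illustration, the kind of guard I tried looks roughly like this (a sketch with assumed imports and state handling; as described above, returning null this way breaks the menu, which needs at least one rendered child):

import * as React from 'react';
import { ContextMenu } from 'react-contextmenu'; // menu component assumed

class CallNodeContextMenuSketch extends React.Component<{}, { isShown: boolean }> {
  state = { isShown: false };

  render() {
    // Naive guard: render nothing while hidden. This is the approach that
    // couldn't work, since ContextMenu expects at least 1 child to exist.
    if (!this.state.isShown) {
      return null;
    }
    return <ContextMenu id="CallNodeContextMenu">{/* menu items */}</ContextMenu>;
  }
}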

Next, I made a contribution to some of the documentation, Contributing.md in perf.html. The issue is #937.


I changed the labels from Good First Bug to Good First Issue and fixed the links to the issues page. I also responded to change requests from one of the devs. You can see my pull request here.


Overall, studying in the Open Source course was a good experience. I want to express my thanks to our professor for introducing us to all of the fundamentals and practices used by the open-source community. I wish there had been more opportunities to work on bugs like I did in my Release 0.2 for debugger.html.

This course has also built a stronger foundation of JavaScript knowledge for me. For example, in my Release 0.1, I learned more about Node.js, Express, routing, and testing while building an API with Google’s libphonenumber. Many of our labs also used an extensive amount of JavaScript and related frameworks.

I learned a lot about the GitHub & Git workflow. I especially memorized the “fork, clone, build, fix, add, commit, push” procedure, and I realize its importance for my future work in software development. This is it for my Release 0.3.

Thank you!

by Jean A. at April 24, 2018 02:36 AM


Zhihao Cai

Learn from the Code Infrastructure

For the last release, I was looking into the Mozilla GitHub repos, hoping to find some bugs to fix. Since most Mozilla projects are split into small components, there are relatively many more miscellaneous bugs compared to the centralized VSCode and Brave project repos.

As for my growth goals, I want to take this chance to learn from the infrastructure and development cycle of an open source project.


The first bug I worked on was adding CSS lint support to the Blurts Server’s infrastructure. Blurts Server is a Node.js prototyping project for Mozilla’s Breach Alert feature. The fix was straightforward, since the issue page already gives out the solution: all I needed to do was read through the stylelint repo, understand the basic usage, and apply it to the project.

Basically, package.json is the core of working with Node.js, since we use it not only to include dependencies but also to define behaviors. Not surprisingly, many of the JS libraries are able to work together, which makes infrastructure changes a lot easier, without extra modification.

For example, we just need to insert a single line in the “scripts” section to enable the “npm run lint” command to trigger our stylelint check, and we can customize our CSS checking rules by simply adding a “.stylelintrc” file.
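The screenshot is lost in this feed, but the change plausibly looked something like this (a sketch, assuming stylelint's standard CLI):

...
  "scripts": {
    "lint": "stylelint \"**/*.css\""
  },
...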


The second one was to remove unused references from Kitsune, which is a Django application. I hadn’t done any Python before, but I didn’t find it so hard to perform a clean-up.

By running the “git grep” command with the -il options, I could easily identify where the references are used.

$ git grep -il 'treejack'
kitsune/products/jinja2/products/product.html
kitsune/sumo/migrations/0002_initial_data.py

What really interested me is “product.html”: unlike normal HTML, it uses {%...%} as open/close tags. After some research, I figured out that the “jinja” in the file path refers to Jinja, a template engine for Python, and that the {%...%} blocks represent control structures, for instance a for loop ({% for p in products %} … {% endfor %}) or an if…else statement.


The last issue I worked on is similar to the second one, a bit of clean-up, though this time the topic is logging in JavaScript.

What I have learned:

As part of development, a project may need to utilize a varied set of tools and libraries for testing purposes or for proofs-of-concept, but as the project progresses, constant monitoring and code maintenance are inevitable.

Learning how to set up and organize the infrastructure is as important as implementing new features, since it makes it much more convenient to maintain code health and add further support.


by choy at April 24, 2018 02:13 AM


Aliaksandr Ushakou

Release 0.3

The goal of this release is to contribute to a real open source project.

The issue that I’ve decided to tackle this time is “Trying to save page offline always shows Downloading…”. Actually, I had been working on this issue since Release 0.2, but until now nothing seemed to work. (By the way, the project is the Brave browser.)

So, the issue says that if we load a web page while online and then try to save it while offline, the downloading process runs forever and no error is ever shown.

Let’s try to reproduce it!

save-offline1-gif

And yes, the issue is reproduced! It means that we can try to fix it.

First, we need to find a code block related to the downloading process. Usually, if I have no idea where to start searching, I just use the search bar; for example, we can try keywords like “download”, “downloading”, “save file”, etc. Ok, let’s say we’ve found a code block that might be what we need. But how do we make sure? I think the best way is to set a breakpoint and try to download a web page. If the breakpoint is hit, we have found something related to the downloading process. Of course, it doesn’t always mean that the found code block is what we eventually need; however, it does mean that we are somewhere close to it.

The file that I found interesting is filtering.js. I found out that there is an ‘updated’ event that is triggered every time I try to download a web page.

code-for-r3-1

So I decided to work on this file.

The obvious idea that came to my mind was to check the network connection when the downloading process starts. If there is no network connection, the downloading process should be cancelled or interrupted. So I started to work in this direction.

I found out how to check the network connection and added two events that track it.

code-for-r3-2

After that, I changed the ‘updated’ event handler to include logic that interrupts the downloading process if there is no internet connection.

code-for-r3-3
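The code screenshots don't survive in this feed; in spirit, the change looks something like this sketch (using Electron's download API; the names and structure here are assumptions, not the exact Brave patch):

import { ipcMain, session } from 'electron';

let isOnline = true; // updated by the renderer, e.g. from navigator.onLine
ipcMain.on('network-online', () => { isOnline = true; });
ipcMain.on('network-offline', () => { isOnline = false; });

session.defaultSession.on('will-download', (event, item) => {
  item.on('updated', () => {
    // If the connection dropped mid-download, interrupt instead of
    // letting the item sit in "Downloading…" forever.
    if (!isOnline && item.getState() === 'progressing') {
      item.cancel();
    }
  });
});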

Ok, time to check if it works.

save-offline2-gif

And it works!

That’s it for today.

Pull request can be found here.

Thanks for reading and take care everyone!

by aushakou at April 24, 2018 01:52 AM


Hao Chen

Tricky Javascript with a sprinkle of React


This week’s blog will summarize my 1+ month journey. I will be tackling this issue within Debugger.html.

The preview gets stuck when the cursor moves quickly over a variable in debug mode. The issue is quite hard to reproduce, as it doesn’t occur every time the cursor slides over.

I was provided with a possible lead on where things might have gone haywire. Exploring the stack led me to onMouseOver, where the code detects whether the cursor is over a variable. I noticed that this mouse event is attached to something called codeMirrorWrapper, a giant invisible mask that covers the entire debugger editor. Also, this handler is a debounced function; check out this link to learn more about debounce. So my initial thought was that the call to updatePreview() was somehow late to the party, using an old event target due to the debounce. But removing or increasing the timer did not make a difference, so I moved on from this.
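For anyone new to the pattern: a debounced function only runs after a quiet period, so a burst of mouseover events collapses into a single call. A minimal sketch:

function debounce<Args extends unknown[]>(fn: (...args: Args) => void, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer); // a new call cancels the pending one
    timer = setTimeout(() => fn(...args), wait);  // fire only after `wait` ms of quiet
  };
}

// e.g. wrapper.addEventListener('mousemove', debounce(updatePreview, 50)); (names assumed)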

Since I was struggling to find a thread to hang on to, I thought to myself: maybe this bug was introduced in a past commit. All I would have to do then is decipher one specific commit.

time to go back in time…commits, close enough.

The term is git bisect. My professor’s blog post is very helpful in getting me started. What I discovered shocked me! The issue existed from the very moment the functionality to preview variables was introduced. So this issue wasn’t a regression of any sort. That much was confirmed.

Moving on…

I began to pair with a few mentors within the debugger community. The following summarizes some of the things we’ve tried or considered:

  • Adding hover events to each individual variable during debug mode? Wayyy too costly in terms of performance (if the codebase is large).
  • Ignore default behavior with Event.preventDefault().
  • Adding/removing async/await to the updatePreview() call.
  • Adding additional mouse states such as onMouseEnter and onMouseLeave to code mirror. Doesn’t pick up individual variables, the call only triggers when entering/exiting the editor mask.

At this point, I’m stumped along with the devs I’ve been pairing with. Time to get my hands really dirty. I began to spam console logs within the mouse events. I noticed something fishy… take a look at the below description of what I observed.

I’m not so sure why another set of onMouseOut and onMouseOver is called. So I added an onMouseOut event that contains the same logic as onMouseOver. I also removed a flag in updatePreview() to produce the following.

I got rid of the Preview…but this solution is far from being correct. The yellow highlight still remains.

I made a pull request just to showcase some progress, but I’m hoping to find the root cause in the near future.

I noticed that a class is added to the variable for the CSS to highlight. Further digging around led me to a componentDidMount() call that marks a specific range of characters with this class. From here, I took a moment to explore a quick overview of React and the lifecycle of a component. Once again, I spammed the lifecycle calls with console logs. The popup was being rendered right after being unmounted, which leads to componentWillUnmount() not being called to clear the marker (for the highlight).
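A hedged sketch of the pattern involved (the component and prop names here are illustrative, not the actual debugger.html source); the bug is that the cleanup in componentWillUnmount never runs when the popup is rendered again right after unmounting:

const React = require('react')

class Popup extends React.Component {
  componentDidMount () {
    // Mark the hovered range so CSS can highlight it (CodeMirror API).
    const { doc, start, end } = this.props
    this.marker = doc.markText(start, end, { className: 'preview-selection' })
  }

  componentWillUnmount () {
    // Supposed to remove the highlight, but never called in the buggy case.
    if (this.marker) {
      this.marker.clear()
    }
  }

  render () {
    return null // preview content omitted in this sketch
  }
}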

This is how far I’ve gotten. I look forward to continuing to tackle this issue in the near future.

TLDR: I’m still dealing with a tricky JavaScript issue that is hard to reproduce and hard to pinpoint the cause. Got the awesome opportunity to pair program with 3 other developers around the globe. Pushed myself to persevere and break problems down + learned a lot about JavaScript, React and Redux!


Tricky Javascript with a sprinkle of React was originally published in Haorc on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Hao Chen at April 24, 2018 01:27 AM


Sean Prashad

A Challenge to Myself

The view from 8 months ago

25 lessons that OSS has taught me over the past 8 months

Here I am at the end of an 8 month long journey; a road trip that was originally set to be 4 months but ended up being one hell of a ride that I couldn’t resist taking a second serving of. 20 blog posts later, I realized that I’ve learned a lot and I want to share it with you.

Without further ado, here are 25 lessons that I’ve learned from my journey in Open Source:

  1. Open source has no rubric — Landing a merged PR doesn’t get you an “A+” or “B+”. Rather, the process is more valuable, and as such, I derived way more enjoyment from putting the puzzle pieces together than from the final image I saw.
  2. A “bug” is more than what you think it is — Most people think bugs are undesired features, but the term encompasses things like spelling errors and unclear documentation!
  3. Not knowing a tool/technology shouldn’t scare you — Many students think that if they don’t know a language such as “JavaScript” or “Python”, they can’t contribute to those kinds of projects. You’ll need to use another excuse! Getting involved with documentation was how I first contributed to Rust when I hadn’t written a single line of Rust yet.
  4. The meaning of “community” — Community is something that I was fortunate to have experienced. Mozilla’s AMO project has been nothing short of amazing. The devs have invested copious amounts of time into guiding me through a completely new codebase to help me succeed where I once failed.
  5. Documentation is what turns good projects into great ones — I’ve come to appreciate clear and concise documentation in projects. Even more so, clear examples are a godsend!
  6. Explaining technical work without technical jargon is challenging — I’ve gained a much greater appreciation for conference speakers who speak to a general audience. Our bi-weekly demos since January have opened my eyes to the skill that it takes!
  7. Seek first and seek well — When friends ask “How do I do x?”, I think to myself “Why are you asking me? That information is available somewhere!”. Now this isn’t to say that I think my friends are helpless, but rather that I’ve developed a mindset to “seek first and seek well” before asking questions.
  8. Open Source allows me to give back — Through finding my own success, I’ve been helping others find success too! Hao, Jafar and Chaya needed some help when things got bumpy, but once they got going.. well you can see for yourself in each issue 😁
  9. Every bug is a story waiting to be told — Every bug has its own unique story behind it — and it’s up to you to help write the epilogue! What’s even more awesome is that I’ve been able to share my stories during interviews to score both technical and behavioural points 😎
  10. Hands on experience that employers want to see — The technical know how gained through solving bugs whether it be from documentation, front-end, back-end, tooling and so much more is something that employers love to see!
  11. One of a kind learning experience — The courses taught by Dave are stimulating, challenging, rewarding and have truly been one of a few highlights of my 8 year Senecan career. I always looked forward to new material that was relevant and up-to-date with what was happening in the tech landscape.
  12. Blogging — Blogging was something new and unfamiliar to me. I have to admit, it’s a lot harder than I thought trying to translate information into words. The great thing is that I’ve left my mark on the web for everyone to read!
  13. Live-streaming is fun!— I’ve found that I prefer live-streaming my work on bugs rather than blogging about it. The biggest downside? Nobody else wants to watch me spend hours on end to fix a bug… 😶
  14. Open Source is a lifestyle — I can easily see myself working full-time in a community as welcoming as AMO to help give back to those who were in the same place that I was 8 months ago. Open Source is the lifestyle of working in the open and embracing the community.
  15. Networking — Networking is something that not everyone is great at but I challenged myself to attend at least one event dealing with Open Source this semester — the end result? A handful of us visited Rangle.io back in March and Mozilla’s Toronto office in mid-April! Take the opportunity to ask and you never know what might happen!
  16. Sharing the experience — Engaging with the community via Twitter is something that I wished I did from day one. “Facebook is the people you went to high school with. Twitter is the people you wish you went to high school with.” — David Humphrey, 2018.
  17. Standing out — Like I mentioned back in #9, my work in the Open Source realm has been a focal point of conversation during interviews for co-op. More so, it has even helped me to stand out amongst UofT/Waterloo candidates 💪🏽
  18. Never stop learning — Every week there was something new to learn in class — from linting to licenses to Git and so much more — Check it out here!
  19. Open Source brings like-minded individuals together — Surprisingly, our class was only about a dozen students but everyone was here because they wanted to be, not because they had to.
  20. My work is available for anyone and everyone — Through my 8 months of Open Source, I’ve cultivated a portfolio of work that includes over 20 blog posts, 18 landed patches with 5 WIP! See for yourself by searching me on Google!
  21. Starting can be hard but it’s very rewarding — It’s very intimidating to start but once you land your first bug, the feeling is like no other. The key? Be humble and understand that your first bug might be very small but know that you’ll continue down the road to more complex ones in time.
  22. You never know who’s watching — Because you work in the open, you never know who’s watching! I received a surprising message back during one of my bugs for AMO in which someone vouched for my work! Check out my tweet here.
  23. Create a Twitter and follow topics in the OSS world that interest you — I’ve learned a lot just from following individuals on Twitter! Check out @bork and @MargoChepiga for starters!
  24. You can work on any part of a project if you want to —For some projects, you can immerse yourself in anything from UI to documentation to linting to tests! Go wild!
  25. Code literacy — Being able to read someone else’s code, whether it was written 10 years or 10 days ago, is a crucial skill as we’ll have to work with others in the future! I’ve been practicing this for months and have a good idea on where to start searching for features in code using things like git grep!

And so… 25+ demos given, 21 blog posts authored, 18+ bugs fixed, 2 field trips attended and 1 set of stickers later, here I am 😁

Phew.. that was a lot to say. So with all said and done, this post will serve as a memoir to my future self to never give up when things get tough, to always keep learning and to continue giving back to the next generation. More importantly, let this post serve as a challenge for me to come out with another 25 lessons learned in the next year.

Like they say — once something’s on the internet, it’s there forever… Now I’m wondering if I bit my tongue too soon..

Ah well, onto the next chapter! 😼

Sean


A Challenge to Myself was originally published in Open Source @ Seneca on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Sean Prashad at April 24, 2018 01:04 AM

April 23, 2018


Margaryta Chepiga

Failure?


Have you ever had a situation when you finally fixed a bug after putting an enormous amount of time & effort? Rhetorical question, isn’t it? I am more than sure that the answer is yes. Do you remember how you felt? Do you remember that exact moment when you realized that you found the solution and it works?

When I found the solution to this bug ( blog post about the issue is here ), I felt ( and reacted ) approximately like this:

[embedded reaction GIF]

But…

In almost every story, there is a but. I found a solution. I checked it. I double-checked it. And then I checked it again. I couldn’t believe that I did it. I sent a pull request, added screen captures, and then I got the best feeling ever. The feeling which is the reason for me waking up every morning and the reason I don’t sleep at night. I felt accomplished. I felt like I did something today. I felt complete. With that, I finally went to sleep.

It was a terrible night. I was sleeping and not sleeping at the same time. All night long, in my dream, my solution was not working. For various reasons. All night I had a feeling that I did something wrong, made a mistake. That in reality my solution was either wrong, or I had just mistyped something and the solution was not even a solution. Then I thought that I hadn’t fixed it at all, and it was only a dream. So I would wake up in the morning and the solution would not be there, and neither would the PR.

I woke up before my alarm rang. Went straight to my laptop. I had to know that it was there. I had to know that I fixed it. It was. I felt relieved. But not for long. At the back of my head, I had this annoying feeling that

  • It is not a perfect solution
  • It is wrong, you just don’t know about it yet
  • There must be an edge case that I haven’t covered yet

The weird thing is that my feeling was right. In a couple of days my PR was reviewed, and not only was my solution not the best, it turned out later that it was causing a bug.

[embedded GIF]

So the original code was:

[embedded code: the original windowStore.js snippet]

My first fix was the following:

[embedded code: fix number one]

Here I basically checked whether the url is a new tab page url; if not, we don’t reset the state.

Even though it looked like it worked ( as in the icon was not disappearing anymore ), it didn’t.

According to various console.logs, the expression

getBaseUrl === getTargetAboutUrl('about:newtab')

would always return true.

After a couple of hours of debugging and a couple of rounds of trial and error, I found out that if I put the result of the statement into a variable and used the variable instead, the result would not always be true. Fix number two:

[embedded code: fix number two]

This means that it works as expected. However, this solution was causing problems too.

I was devastated. I had spent so much time and effort. Thought that I found the solution. Twice. But still, it wasn’t it. Give up? Move on to another issue and just forget about it? I couldn’t. After weeks of debugging, understanding the code, and involving other people, I just couldn’t drop it. There are certain situations and issues where it is a smart decision to move on. This one felt like it wasn’t. It was certainly hard to keep going. Hard to not give up. But can I grow and learn without overcoming obstacles? Should I just take the easy way and do things that are familiar and easy for me? What would that decision give me? How would I benefit from it in the future? Apparently, I am just not the type of person who gives up and looks for an easy way. I knew that before, otherwise I wouldn’t be where I am right now, but I haven’t always thought of it as a good thing.

I kept looking for a proper solution and I think I found it.

[embedded code: the final fix in windowStore.js]

Originally, windowStore.js had the same code but without the extra if statement that you can see above. So basically, I checked whether an app download action was performed, and only if not do we reset the state. Result? It worked.
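In rough outline, the idea looks like this (a sketch of the shape only; the constant and helper names are illustrative stand-ins, not the exact Brave source):

const appConstants = { APP_DOWNLOAD_ACTION: 'app-download-action' } // stand-in

function resetNavigationState (state) {
  return Object.assign({}, state, { icon: null }) // stand-in for the real reset
}

function onAction (action, state) {
  // Skip the reset for download actions so the icon state survives.
  if (action.actionType !== appConstants.APP_DOWNLOAD_ACTION) {
    state = resetNavigationState(state)
  }
  return state
}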

[embedded GIF: the fix in action]

Looks extremely easy. Works perfectly. Was it though?

To be honest, I am still not 100% sure that this solution is the best one. My PR has not been re-reviewed yet. Therefore, to sleep better at night, I checked all the cases I could find and made sure everything works as expected.

Be Brave. Don’t give up. You are not a failure.


Failure? was originally published in Open Source Adventure on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Margaryta Chepiga at April 23, 2018 10:42 PM


Hongcheng Zhang

OSD600 – RELEASE 3

In release 3, we are asked to continue working on real open source projects, but to do more. It means we need to fix or contribute to more issues than in release 2 to show a degree of growth. So, I decided to fix more than two issues, in different areas than release 2. I found and contributed to three issues.


What I have done

The first one is Mozilla Science Lab. It is a community of researchers, developers, and librarians making research open and accessible.

The second one is Mozilla Office, a public Corsica instance for Mozilla offices and home offices. If you are at a Mozilla office, this project is what powers the content on the flat screen TVs throughout the office.

I found two issues about HTTPS. HTTP is a protocol that allows communication between different systems. It is used for transferring data from a web server to a browser to view web pages, but the problem is that the data is not encrypted. Therefore, using HTTPS, where the ‘S’ means secure, is strongly recommended. HTTPS involves the use of an SSL certificate, which creates a secure, encrypted connection between the web server and the web browser.

There are lots of insecure URLs in the above two projects. They want to convert the URLs that already support HTTPS over to HTTPS.

When I tried to convert HTTP to HTTPS in the first project, I found more than 100 HTTP URLs, so I used a switcher extension in VS Code to convert them all at once. I did not realize that some of those URLs still did not support HTTPS, so of course some things broke, including links and images. Therefore, I had to go back to the original and double-check the conversions one by one to make sure each one was good.
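A small Node.js script along these lines (my own sketch, not part of either PR) could have flagged ahead of time which URLs don’t actually answer over HTTPS:

const https = require('https')

// Resolves to true only when the https:// version of the URL responds
// with a non-error status code.
function supportsHttps (url) {
  return new Promise(resolve => {
    https.get(url.replace(/^http:/, 'https:'), res => {
      res.resume() // drain the body so the socket is freed
      resolve(res.statusCode < 400)
    }).on('error', () => resolve(false))
  })
}

supportsHttps('http://example.com').then(ok => console.log(ok))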

Here is the first PR and second PR.


The last project that I contributed to is Kitsune, which is the platform that powers SuMo (support.mozilla.org). There is a URL pointing to a Troubleshooter addon in settings.py which currently returns 404; the addon is no longer available. Therefore, I removed all references to the Troubleshooter addon. Here is my PR.


Conclusion

This is my last semester. To be honest, this course is the most useful one I have taken. I attend college to find a job. I have learned some languages, including C, C++, and Java, but all at a basic level, and I have to improve those skills to satisfy job requirements; OSD600 is the course that really relates to the job. In release 1, I created an open source RESTful API and became familiar with how to use GitHub. In releases 2 and 3, I contributed to real open source projects. This is the real experience that jobs require. Nice course!

 

 

by hongcheng1993 at April 23, 2018 10:37 PM


Woodson Delhia

Open Source Release 0.3: Miso-Haskell Function Helper

This blog is about my last open source contribution for my open source course.

About The Project

The project that I decided to contribute to is called Miso. Miso is an open source front-end framework, written in Haskell, for building interactive single-page web applications. Miso is heavily inspired by The Elm Architecture (TEA) and uses GHCJS to perform JavaScript FFI. One thing to note: Elm is also built in Haskell; however, many of Haskell’s great features have been cut down to make the framework easy to handle for beginners. Here is the GitHub repo of Miso, where you can find more information.

Why Miso?

I have been looking for the past couple of weeks for a small front-end framework written in Haskell that I can use to create a small drag-and-drop file upload widget. The intention is to connect my sforce-migration tool to the widget. My small library parses Salesforce projects between YAML and XML. Due to the lack of documentation about Salesforce’s front-end framework, I initially intended to write the widget in PureScript. However, that would have ended with me re-writing my library in PureScript and losing some of the strong libraries within the Haskell ecosystem. After countless searches, I finally stumbled on Miso’s webpage and decided to give it a try.

About The Helper Function

Setting up Miso was quite simple (I was actually expecting a painful process). After setting up Miso and starting the starter project that was provided, I decided to tackle the implementation of the drag-and-drop widget. Miso is very intuitive: the framework lets us create HTML elements with nice helper functions such as div_ and h1_, pretty much all the standard HTML tag names with an underscore. Below is the type signature for the div_ function, but most elements have the same signature. Each element accepts a list of attributes that may trigger an action, and a list of child views, which are pretty much the content within the element; after receiving the two arguments it returns a View action.
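For reference, the signature in question looks like this (as I recall it from Miso’s documentation):

div_ :: [Attribute action] -> [View action] -> View action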

Constructing an element with an attribute requires that we import the Map module and use the singleton function, which creates a Map with a single key (the attribute name) mapped to its value.

However, combining attributes can really get tedious, so I decided to create a helper function using the Monoid and Map modules. The function (<>) from the Monoid module is equivalent to mappend. So, I thought about creating a similar helper function to facilitate the way we combine attributes and also make them easy to read. Thus the creation of this smart Attribute constructor, (=:). It abstracts the M.singleton function for us and allows us to use it as an infix function. This means that instead of

functionName arg1 arg2

we can use the function like so

arg1 functionName arg2

and we can now use the (<>) to combine them like so

(arg1 =: arg2 <> arg1' =: arg2')

Below is a code example.
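Since the embedded example did not survive here, below is a hedged reconstruction of the helper and its use (the real definition lives in the Miso.Util module linked below):

import qualified Data.Map as M

-- Infix smart constructor: wraps M.singleton so attributes read naturally.
(=:) :: k -> v -> M.Map k v
k =: v = M.singleton k v

-- Combining attributes with (<>) now reads declaratively, e.g.:
--   style_ ("color" =: "red" <> "font-size" =: "2em")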

And that is pretty much it! Here is the link to my PR https://github.com/dmjio/miso/pull/409. We can also see my code here in the Miso.Util module https://github.com/dmjio/miso/blob/86f7a3e07af5de2217800237078ba2c492aa7c74/src/Miso/Util.hs.

 

by Woodson Delhia at April 23, 2018 08:47 PM


Bakytzhan Apetov

Lab 6: Fixing White Space Search

For this lab, we worked with the Brave Browser and how it handles white spaces in the URL bar.

We noticed that while Google Chrome and other browsers display the white space after searching this string: “https://www.google.ca/search?q=dog cat”, the Brave Browser displays it with the %20 representation of the white space. We needed to fix this bug.

I followed the standard fixing-a-bug routine by forking the Brave Browser repo, cloning it, and changing the file contents.

What I changed is:

  1. I added the function to replace white space in js/lib/urlutil.js.

[screenshot: the new function in js/lib/urlutil.js]
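The gist of it, as a hedged sketch (the real function name and signature in urlutil.js may differ):

// Show percent-encoded spaces as literal spaces in the URL bar.
function getDisplayLocation (url) {
  return url.replace(/%20/g, ' ')
}

// e.g. getDisplayLocation('https://www.google.ca/search?q=dog%20cat')
//      -> 'https://www.google.ca/search?q=dog cat'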

2. Then I added a test case for this bug in test/unit/lib/urlutilTest.js (I moved the second string onto the next line because it wouldn’t fit in the screenshot otherwise):

[screenshot: the new test case in test/unit/lib/urlutilTest.js]

So after that it was working:

[screenshot: the search URL now displays with a space]

by Jean A. at April 23, 2018 08:24 PM


Justin Vuu

OSD600 – Lab 2 – VSCode

For this lab, we installed Visual Studio Code as well as built our own version of it.

VSCode has proven to be a very useful… lightweight?… tool in coding throughout this course. Being able to code, build, debug, and test code in VSCode has made developing code much easier!

I didn’t install any extensions. I found that what was available by default gave me everything I needed for this course up to now. Perhaps I should explore what extensions are available.

Building my own version of VSCode

I did have some difficulties at first with trying to build VSCode on my machine. Mainly it had to do with the prerequisites. After scratching my head for a second, I decided to just uninstall the prerequisites and try again from the top. I’m not sure which step I missed or did wrong, but the build completed successfully the second time through!

Live Debugging

Arguably the best part of VSCode. It took me a while to get the hang of it at first because this was all new to me. Even in INT422, in which we used Visual Studio, I never used the live debugging feature.

Now, I used the live debugging feature when working on releases 0.2 and 0.3, as well as lab 6. Being able to see what was going on with the code while being able to make changes to it live was like magic. No joke. I can’t go back to the old ways of Notepad++ and Vim, saving, building, testing, and then manually figuring out what happened.

Electron

This is also when I was formally introduced to Electron. I have used another program built with Electron – Discord – but I never knew what it was back then.

So what is Electron?

It’s an open source framework for creating desktop apps as if they were web apps, essentially using HTML, JavaScript, and CSS to make desktop programs.

by justosd at April 23, 2018 08:13 PM

OSD600 – Release 0.3

For this release, we were tasked again to contribute to an open source project, with the idea of doing something “more” than in our previous release. “More” in this case means doing something different or more challenging so we can grow as contributors.

Returning to Brave


I decided to focus on Brave again for this release because I was already familiar with the project from before. Fixing the issue I chose for Release 0.2 has taught me a fair amount about Brave’s inner workings.

Growth Goals

In order for us to grow, we had to aim higher. We were given some suggestions of goals to help us, and these were the ones that I chose:

  • get more involved in the community
  • to work on more bugs than last time
  • to gain more experience in different areas of contribution

Originally, I chose to work on more bugs than last time. However, due to the time it took to discuss the first bug I took on, I figured that working on multiple code-related issues would not be feasible. In order to achieve my first goal, I also had to look into another area of contribution, and that was updating their documentation.

Achieving My Goals

Joining The Community

For Release 0.2, all I did was comment on a triaged bug that I wanted to work on and then create a pull request. I never got involved with the community at all.

This time I joined their Discord and took part in discussions. I also chose to work on a more recent issue that was getting some attention. I brought up the possibility of localization issues that the fix would introduce, as well as my approach to resolving the issue.

Working On More Bugs

Working on more bugs seemed like it would be simple at first. However, as I mentioned in the previous section, it did come to a point where it didn’t seem like it would be possible. Getting feedback was pretty quick at first, but as the week drew to a close, responses were taking longer and eventually I got no responses at all. Brave is currently undergoing a big upgrade so it’s likely all team members were focused on that.

In order to achieve this goal, I had to find issues myself. I assumed that finding code-related issues would be very difficult, so I found issues in their documentation instead. This would be more beneficial to me as I hadn’t contributed to documentation in the past, and I can make that a growth goal!

With those two issues, I’ve basically achieved this goal. I know it’s only one more than my previous release. I did originally aim for 3, but I had to downscale due to time.

Gaining Experience In Other Areas Of Contribution

To achieve this goal, I went through Brave’s documentation. I originally expected that I’d only be fixing the odd typo or grammar error. Luckily, it didn’t take long to find a document that was outdated and had a glaring mistake.

My Contributions

Improving About:Passwords

This issue was filed by a collaborator. In Brave, about:passwords is a page that lets users manage the passwords the user allowed the browser to store. At the top of the page, it instructs the user where to go if they want to change how their passwords are stored.

Context menu on Mac

Currently, the page suggests users go to Preferences > Security. In some ways, there’s nothing wrong with this because, on MacOS, Windows, and Unix, the menu to access the Security section is called “Preferences”. Additionally, the URL to get to Preferences is “about:preferences”.

The issue occurs when users try to access Preferences through the context menu. On MacOS, the option in the context menu to get there is aptly called “Preferences”. However, on Windows or Unix, the same option is called “Settings”. Now the instruction may not make sense to some users on either of those two operating systems. Savvy users may figure out that it means “Settings” because it leads to about:preferences. Other users might go looking for a “Preferences” option.

 

Context menu on Windows

There are two ways to fix this: either remove the check in the context menu that detects the OS and changes “Preferences” to “Settings”, or add a check to about:passwords that changes the instructions. I assumed that there was a reason for the different name and that the check was added later in development. With that, I approached the issue with the second option.

Working On The Solution

There are three files responsible for the passwords page:

  • about-passwords.html – the page that is loaded but we can ignore this file
  • passwords.js – renders the content. It’s referenced by the HTML file, and uses strings from…
  • passwords.properties – the localization file

Currently, in passwords.properties, the string for the instructions is stored in one variable.

This needed to be split into three: one that holds the part of the instruction that is common to all three OSes, one that holds the part specific to MacOS, and one that holds the part specific to Windows and Unix.

In passwords.js, I needed to modify this block of code that changes which instruction is displayed depending on the OS.

First I needed to import “isDarwin”, which is a function built to check if the OS is a Mac.

I changed the above block of code so that the text is in two <span> tags inside the <div> at line 232. The first span would have the ID matching the common instruction, and the second span would use an inline condition statement to change its ID depending on the OS.

The user who reported the issue also suggested making the instruction a link that takes the user to the Security page, hence why the second span has an onClick property.
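Roughly, the render change looks like this (a sketch only; the l10n ids and the click handler here are illustrative, not Brave’s actual identifiers):

const isDarwin = () => process.platform === 'darwin' // stand-in for Brave's helper

// Hypothetical handler that would navigate to about:preferences#security.
const goToSecurityPreferences = () => { window.location = 'about:preferences#security' }

const instructions = (
  <div className='passwordInstructions'>
    <span data-l10n-id='passwordInstructionsCommon' />
    <span
      data-l10n-id={isDarwin() ? 'passwordInstructionsDarwin' : 'passwordInstructionsWinLinux'}
      onClick={goToSecurityPreferences}
    />
  </div>
)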

I added a bit of styling to make the link apparent. For the most part, it seems to work, but when I asked a friend to test my branch on their Mac, the link wasn’t orange.

How it appears on Windows
How it appears on Mac

For the sake of the assignment, and with the approval of the collaborator, I created a pull request labeled “work-in-progress”. Though the semester is over, I really do want to see this issue through to the end.

Updating componentStructure.md

The componentStructure.md document is extremely outdated. This document explains how a component is created – what it extends – the hierarchy of the components, and a glossary explaining each component’s function.

Most of the information in that document reflects what Brave was like 3 years ago! It has changed drastically in that time.

On the image to the left, you can see that there are only a small handful of components: a cross-section of what Brave was like in its early life, back when every component was stored in the js directory.

Today, Brave has well over 100 components. Some components have been restructured and renamed as well.

3 months ago, a contributor updated the hierarchy to what you see on the left. However, the contributor erroneously thought that it described the directory structure of Brave’s components. It’s actually a structure of how each component references another. So now, the hierarchy is a strange mix of an outdated component tree and the current directory tree.

I filed this issue myself, made corrections, and submitted a pull request.

Changes That Needed To Be Made

For starters, the very first line in the document states that all components extend ImmutableComponent, which in turn extends React.Component.

This is no longer true. A quick look at many components shows that they extend React.Component directly:

So I changed this like so:

The hierarchy needed a serious update. Some components have been renamed; “App”, for example, has been changed to “Window”. I undid the changes made by the previous contributor, which replaced “Main” (a component still in the program) with “Renderer” (a directory). Then I added every new component Brave uses. This added over 100 entries, totaling 180 items in the component hierarchy.

To give you an idea of how much has changed, see above how Main (or Renderer) directly uses 4 components. This is how many components Main uses now:

I added the new components to the glossary and explained them to the best of my ability.

by justosd at April 23, 2018 07:22 PM


Yalong Li

OSD Release 0.3 final post

While trying to fix this issue in debugger.html, I found some other bugs. Both issues are related to the "Set directory root" menu button on the left side panel of the debugger. They are not filed in the issues tab.

The first issue occurs when there is a webpack folder. When trying to set a subdirectory as the root, the content of the folder goes missing, but when setting it directly, the content is rendered. It happens because webpack sources have different URLs than ordinary ones.

Issue 1 - STR:
  1. Go to https://firefox-debugger-example-react-js.glitch.me/
  2. On the left panel, right click on "Webpack" folder and Click on "Set directory root"
  3. Then right click on "app" folder and Click on "Set directory root" ( notice the content is missing ).

Issue 2 - STR:
  1. Go to https://davidwalsh.name/
  2. On the left panel, right click on "davidwalsh.name" and Click on "Set directory root"
  3. Then expand the subfolders; right click on "libs" and Click on "Set directory root" ( notice the content is missing ).

[screenshots: issue 1 and issue 2]
I fixed both issues and added test coverage to the code. It was a learning process debugging these issues. The pull request can be found here.

Updates:

Another member of the devtools/debugger team asked me to fix the issue, but I could not find the source code for it. So, I went to David's office to ask for help. He was knowledgeable and experienced at tracking down bugs. We spent about 15 minutes and found roughly where the bug was. Compared to me spending hours doing it alone, David saved me a bunch of time on debugging. Big thanks to him.

So, I wrapped up the code and updated my pull request on GitHub.

by Yalong (noreply@blogger.com) at April 23, 2018 07:17 PM


Joseph Pham

OSD600 – Final Release

For the final release, I decided to stick with Firefox Screenshots. I was still having issues with debugging the extension, so this time I took a different approach. I looked through the solved/closed issues to see if there was any mention of debugging or something that could possibly help me. I stumbled upon a closed issue that used Firefox Nightly to recreate the issue. Maybe if I tried recreating bugs from solved issues, it would help me with my debugging problem. I felt like I was getting closer to being able to debug the extension, but again, no luck. With all of these problems and no help from their documentation, I decided that I should document how to install the extension on Linux.

I remember when I first started on this project, it took me about 4 hours to install PostgreSQL and to get the server up and running. Now that I am familiar with PostgreSQL, it took me about 10 mins to uninstall, purge and reinstall it. The installation isn’t too difficult if you know what you are doing. I uninstalled and completely purged PostgreSQL from my laptop. I had to make sure that there was no trace of it left anywhere on my system. I had to kill the open ports, stop the services and then uninstall the program and all of its dependent packages. I reinstalled the database and ran the program. It worked! Now, I had to uninstall and purge again and start documenting.  I think I did this about 5 times before being confident enough to submit a pull request. This was their response:

[screenshot: the maintainers’ response to the pull request]

What was frustrating about all of this is that their README only says “Install PostgreSQL”. There are a bunch of additional steps you need to do before getting the server up and running and getting the extension to run on localhost. If there had been instructions initially, this would have saved me a lot of time (and tears). I had hoped that with my contribution, I could help somebody through the installation without trouble. I understand where they are coming from, but they should have at least linked PostgreSQL’s instructions somewhere on their page.

For my second bug fix, I found a CSS issue that causes buttons to remain highlighted even after being clicked.

I found this bug quite simple to solve. Now that I am familiar with the code, I located the CSS file that contained the styling for the button. The issue was that the styling for hover and focus was the same. I separated the two selectors and removed the background-color for focus. I kept the border, however, so that you can still distinguish whether the button is in focus or not (a sketch of the change is below). I submitted the pull request with no issues this time, and hopefully they accept my code change.
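The shape of the CSS change, with illustrative selector and color values (the actual stylesheet differs):

.button:hover {
  background-color: #ededf0;
  border: 1px solid #b1b1b3;
}

.button:focus {
  /* keep the border so a focused button is still distinguishable,
     but drop the background so it doesn't look permanently active */
  border: 1px solid #b1b1b3;
}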

During the last 4 months, I gained first hand experience in the open source world. I was hesitant in the beginning, thinking that this would be too difficult for me. At some points, this was true, but I still tried my best with these assignments. With this last release, I felt like I learned how challenging yet rewarding open source projects can be. This was definitely a learning experience that I can carry forward throughout my career.

 

by jpham14 at April 23, 2018 07:03 PM


Aliaksandr Ushakou

A first glance at Open Standards

Software testing is very important for any project. It is important because people rely on stable and error-free products. Testing Open Standards like ECMAScript is even more important because every project that uses ECMAScript depends on it.

By the way, what is ECMAScript? ECMAScript is a scripting-language specification standardized by Ecma International in ECMA-262. It was created to standardize JavaScript, so as to foster multiple independent implementations. JavaScript has remained the best-known implementation of ECMAScript since the standard was first published, with other well-known implementations including JScript and ActionScript (Wikipedia).

JavaScript has been one of the most popular programming languages lately. Many have heard about JavaScript, but not everyone knows that JavaScript is a trademark owned by Oracle. Using trademarks can lead to all sorts of problems; therefore, lots of developers use the name ‘ECMAScript’ instead of ‘JavaScript’.

Ok, let’s take a look at the test itself. Here we can find steps for running these tests.

Usually I use Windows PowerShell, and in most cases everything works fine. But this time something went wrong. When I used the following command,  test262-harness test/**/*.js , the tests started to run and everything seemed fine. Tests were running and running, and after waiting an hour my patience ran out and I pressed Ctrl+C to stop the tests.

It was clear that something was wrong, but I didn’t know what, and therefore I decided to wait until the testing was over. It took more than 12 hours and ran 58797 tests.

[screenshot: the PowerShell run, 58797 tests]

Knowing that Windows sometimes has unexplainable issues, I tried to use Git Bash and it worked! (Most likely the difference comes down to how each shell expands the ** glob pattern.)

[screenshot: the Git Bash run, 205 tests]

58797 tests on PowerShell vs 205 on Git Bash
 

After that, I had a look at the Array.prototype.reverse() tests. I chose the first test, studied it, and then rewrote it using the assert() function. The result can be found here.
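For a flavour of what that rewrite looks like, here is a sketch in the test262 style (not the exact submitted test; assert.sameValue is the harness helper):

var x = {};
var a = [true, x, 'str'];

assert.sameValue(a.reverse(), a, 'reverse() returns the same array object');
assert.sameValue(a[0], 'str');
assert.sameValue(a[1], x);
assert.sameValue(a[2], true);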

by aushakou at April 23, 2018 05:22 AM


Abdul Kabia

On the fileside of things

Hello there, reader, and welcome to this post. You should know me by now, if not, my name is Abdul Kabia! So the past couple of weeks I was tasked with finding a bug or issue on a Github repo and making a contribution to it. Now this is something I've done before, but this …

Continue reading On the fileside of things

by akkabia at April 23, 2018 04:21 AM


Matt Rajevski

SPO600 Project – Part 3 Reflection

After exploring the source code some more, I have come to the conclusion that this program has already been optimized to the fullest. Any improvement I can think of is already in place or doesn’t provide much of an improvement to the performance. The original 7-Zip was initially released on July 18, 1999, so it has had plenty of time to develop before the Linux port was created. The last time the p7zip source code was updated was July 14, 2016, with the latest patch, 16.02, adding a few bug fixes, such as a memory access violation fix and a fix for the SHA-1 function not working in certain situations.

I chose a file compression software for this project because it was really interesting how a program can take x amount of data, shrink it by 5% to 40+%, and still be able to decompress the data with it still being readable. This process is extremely complex because if the algorithm has even a 0.1% error, then a 1 GB file could lose 1 MB of data, and that could be part of an audio file that might be distorted, a video file missing a frame or two, or a program file that would cause the program to crash.

The process of optimizing this program was hard because the optimizations are already in place. The programs that need the most optimization are the ones that haven’t had much time in the open market. Over time bugs will be found and fixed, and performance improvements are made in the areas that need it the most. The great thing about the open source community working on improving a software is that there are potentially hundreds of people looking at the source code and one of them might notice an improvement that the others didn’t. This also takes some stress off of the original dev team so that they can put extra focus into adding features to the program.

When initially trying to benchmark the program using gprof, I ran into many issues getting it working. I had never used makefiles before, and after discovering how useful they can be, I will now use them more frequently. The source files included a massive list of makefiles designed for different CPU architectures and OSes, including one to set up the program to be used with gprof. The program compiled fine and ran fine, but when trying to use the gmon.out file, it would mention an ‘unexpected end of file’, which leads me to believe that something went wrong when creating the file. Luckily for me, the program had a built-in benchmark option. It didn’t provide the same results as gprof would have, but it did run multiple tests on each of the functions used in the compression/decompression components.

// Overall program benchmark //
[mrrajevski@aarchie p7zip_16.02]$ 7za b "-mm=*"

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs LE)

LE
CPU Freq: 1998 1999 1999 1999 1999 1999 1999 1999 1999

RAM size: 16000 MB, # CPU hardware threads: 8
RAM usage: 1802 MB, # Benchmark threads: 8




Method       Speed Usage   R/U  Rating  E/U Effec
             KiB/s     %  MIPS    MIPS    %     %

CPU                  661  1999   13223
CPU                  624  1999   12478
CPU                  666  1999   13321  120   800

LZMA:x1      47086   651  2646  17213   159  1034
            149903   657  1859  12209   112   733
LZMA:x5:mt1  10144   659  1924  12673   116   761
            145518   658  1864  12272   112   737
LZMA:x5:mt2  10718   682  1963  13390   118   804
            142188   640  1873  11991   112   720
Deflate:x1  110402   642  2184  14018   131   842
            445994   621  2232  13858   134   832
Deflate:x5   40482   636  2449  15587   147   936
            448804   623  2235  13934   134   837
Deflate:x7   15434   662  2582  17101   155  1027
            475964   657  2247  14771   135   887
Deflate64:x5 38920   672  2503  16819   150  1010
            499315   697  2242  15621   135   938
BZip2:x1     23042   651  2140  13922   129   836
            109903   640  1862  11914   112   716
BZip2:x5     17371   660  2198  14498   132   871
             63509   661  1887  12466   113   749
BZip2:x5:mt2 17328   682  2119  14462   127   868
             62237   688  1776  12216   107   734
BZip2:x7      6172   678  2359  15991   142   960
             63919   653  1920  12535   115   753
PPMD:x1      15871   663  2475  16415   149   986
             12652   646  2308  14899   139   895
PPMD:x5       9558   665  2438  16200   146   973
              8063   652  2319  15111   139   907
Delta:4    2386459   642  2286  14662   137   881
           2062576   634  1998  12672   120   761
BCJ        3844213   673  2338  15746   140   946
           3679123   644  2341  15070   141   905
AES256CBC:1 478672   630  1867  11764   112   706
            488557   649  1850  12007   111   721
AES256CBC:2

CRC32:1    1590921   653  1774  11582   107   696
CRC32:4    4273755   651  1466   9539    88   573
CRC32:8    6023965   646  1265   8168    76   491
CRC64      4182943   657  1303   8567    78   514
SHA256      829640   622  2720  16925   163  1016
SHA1       1175588   622  1770  11004   106   661
BLAKE2sp

CPU                  596  1999  11913
------------------------------------------------------
Tot:                 656  2037  13350   122   802

 

The “biggest” optimization I did find was changing the -O flag to -O2, and this provided roughly a 0.5s improvement when compressing a 193 MiB folder. The optimizations I did manage to find in the source code, despite them providing only a ~0.01% improvement, were some of the optimizations we went over in class. The multiplication operation was the main focus of the optimizations I made. The functions were all performing a multiplication inside a loop, and to fix that I hoisted it out of the loop or used a fixed value instead, as sketched below.
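The pattern, as a generic C sketch (not p7zip’s actual code): hoist a loop-invariant multiplication out of the loop so it runs once instead of on every iteration.

#include <stddef.h>

void add_offset(unsigned *out, const unsigned *in, size_t n,
                unsigned scale, unsigned base)
{
    /* before the change, the multiply ran on every iteration:
       out[i] = in[i] + scale * base;                           */
    const unsigned offset = scale * base;  /* hoisted invariant */
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] + offset;
}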

I never managed to find any worthwhile changes that I could push to the community to use in the next build of the project, but I did look into the process. The webpage found here https://sourceforge.net/projects/p7zip/ contains all the source files, documentation, a support forum, and a ticket system used to track and push changes. Interestingly enough, there is an open ticket for patch 18.01 that was created on February 05, 2018. The ticket is currently empty, but that doesn’t mean that someone isn’t working on a bugfix/feature/optimization.

This class managed to shine a light on areas of programming that I had never seen before. I had never dealt with assembly, and seeing the inside of a C program really showed the complexity of the deeper levels of programming. Another thing I had never used before was makefiles, as mentioned earlier. They allow you to manage programs that require multiple files and provide a simple way to build your programs in multiple ways. I’ve always tried to write my programs in a way that is optimal, but after taking this course I realized that the compiler will do most of it for me. I never got to fully experience what it was like to contribute to an open source project, but trying made me have a lot more respect for those who do contribute.

– Matt Rajevski

 

by mrrajevski at April 23, 2018 03:59 AM


Connor Ngo

Conclusion and some data

I wanted to get some before and after videos together, but my recording software was outputting unusable files and I didn't have time to fix it. I did, however, have time to collect data about my optimization!

 

- Over a 10 second period of randomly duplicating circuits built by the users -

Before: ~2.2 bricks per second

After: ~20 bricks per second

That is a massive ninefold increase in speed! We are now able to duplicate anything without any stutters or freezes in the program.

April 23, 2018 03:59 AM


Kelvin Cho

OSD600 – Release 0.3

So for our final release, I’ve decided to work on a bug in Brave, a project I have previous experience working with.

In this release, I picked issue #12569 to work on.

So what is the bug?

Well, the reported bug is that the audio indicator stays on even after the video is over or stopped.

To start off, I will explain two things about Brave’s tabs. The first notable thing is that Brave shows an audio icon on the tab, like this:

The icon here isn’t very special; it has the same functionality as in Firefox. The user can choose to mute or unmute the tab if they wish to do so. But another thing Brave does is that when the user has too many tabs open, it replaces the icon with a blue bar as a way to indicate to the user that sound is currently coming from that tab.

The bug that is currently happening is that the blue bar audio indicator will still remain after the video has been paused or has finished playing.

Research

From the information that I have gathered after looking at the bug, it seems that if the user mutes the tab itself, it will always display the blue bar indicator no matter what.

The reason I believe this is the cause is that once the tab is muted, Brave doesn’t check whether the video has completed or not.

For example:

As you can see, the video is clearly over, but the tab is still muted and the indicator remains.

The Process of Bug Finding

So from the information that we know, the bug seems to be something that has to do with audio. The first thing I did was type:

git grep audio

As you can see, we found a lot of results, so let’s just pick out what we think is useful.

The first thing that caught my eye was a file named audioState.js, and another was audioTabIcon.js.

So far we have found two files that sound interesting and may or may not have to do with our bug.

The first file I looked at was audioTabIcon.js, but it doesn’t seem to have anything to do with the audio indicator.

So I moved on to the next file: audioState.js.

After looking at this for a couple of hours, I started to look at how other variables interact with this file.

Fixing the bug

Interestingly, audioState.js doesn’t really interact with anything that causes it to change. Everything seems to be in this file, so my fix for this bug is to add a condition that checks whether the audio is muted or not.

If the audio is muted, the blue indicator should not be shown, as there shouldn’t be any sound coming from that tab. After implementing the changes, it seems that the bug has been fixed.
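The condition boils down to something like this sketch (the state key names here are illustrative, not necessarily the exact ones in audioState.js):

// Show the blue playing indicator only when audio is active and not muted.
const showAudioIndicator = (frame) =>
  Boolean(frame.get('audioPlaybackActive')) && !frame.get('audioMuted')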

So I decided to write a test file to check whether the audio indicator would still show.

This small test just checks whether the audio is muted and then checks for the blue audio border; the check is expected to return false, as the bar should not appear.

Now that the changes are in, we are ready to make a PR. The pull request is here.

In conclusion, finishing this bug felt very refreshing, and I learned a lot more about Brave in general; I would not have known it had these features otherwise. Overall, I think this was a very interesting bug to work on.

by Kelvin Cho at April 23, 2018 03:59 AM


Ilkyu Song

SPO600 Project - Stage 3

I chose Redis (Remote Dictionary Server) for my project in stage 1. Redis is open source software developed by Salvatore Sanfilippo: a key-value store, both volatile and persistent, that stores and manages its data in memory. Let’s look at the benefits and data types of Redis.

1. The advantages of Redis


  • Specialized for processing data in lists and arrays: the value supports several data types, such as string, list, set, sorted set, and hash. List-type data insertion and deletion are about 10 times faster than in MySQL.
  • Atomic transactions: atomic processing prevents data mismatches when several processes simultaneously request an update of the same key.
  • Persistent data preservation while utilizing memory: data is not deleted unless explicitly removed with a command or a set expiry. The snapshot function allows you to save the contents of memory as an *.rdb file and restore it to that point in time.
  • Multiple server configurations: consistent hashing or master-slave configuration.

2. Redis provides five data types, and there are many processing commands for each data type.

  • String: not limited to text; binary data can also be saved (note that Redis does not have integer or real-number types). The maximum size of a value is 512 MB.
  • List: can be thought of as an array. The maximum number of elements in a key is 4,294,967,295. If the value is larger than the threshold set in the configuration file, it is encoded as a linked list or zip list.
  • Set: an unordered collection with no duplicate data in a key. The time spent adding, removing, and checking existence is constant regardless of the number of elements in the set. The maximum number of elements in a key is 4,294,967,295.
  • Sorted set: called the most advanced Redis data type. Adding, removing, and updating elements is done very quickly, in time proportional to the logarithm of the number of elements. It can be used in ranking systems. Each element has a real-number value called a score and is sorted in ascending order by score. There is no duplicate data in the key, but score values can be duplicated.
  • Hash: similar to lists, consisting of a series of field names and field values. The maximum number of field-value pairs in a key is 4,294,967,295.


I compiled the benchmark binary for Redis with different compile options, and the benchmarks were run on the aarchie and x86 servers with different command counts. The results below show the number of commands executed per second. The aarchie server is a bit faster than the x86 server, although it did not show much difference from the test in stage 1. However, the specifications of the two servers are so different that a simple comparison is difficult. Some developers and architects tend to look at performance only through code, without considering hardware specs. However, the first thing to consider when tuning a database or optimizing code is the hardware specification.
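For reference, the numbers below were produced with the standard redis-benchmark tool; a typical invocation looks like this (-q prints only the queries-per-second summary, -n sets the number of requests):

redis-benchmark -q -n 100000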

1. aarchie: [benchmark results]

2. x86: [benchmark results]







Moreover, I ran the benchmark once again for stage 2. I chose the Redis library source in stage 2 and benchmarked it. Then I used inline assembly (ASM) to try to optimize the code. However, inline assembly does not guarantee an improvement over C; it is better to optimize at the C level first. The two figures below show the results of the original source and the ASM version. The two results are very similar.


While performing stage 3, I am thinking about code optimization again. Code optimization is a program transformation technique that improves code by making it consume fewer resources (i.e., CPU, memory), resulting in faster machine code. I need to keep this meaning in mind. I had tried to convert the code using only the simple knowledge I had about optimization: I thought that converting the code alone would speed up execution, and that changing the compile options would speed up the program. However, in a simple program, the difference is not that big.

I have to keep a few things in mind for code optimization. First of all, I need to know exactly the OS or platform environment where my program will run. (Actually, the library which I chose in stage 2 did not run on x86.) I should also know the specs of the machine on which my program will run, so that I can provide the user with a minimum recommended specification for my program. Finally, I should benchmark repeatedly; to make a good program, I have to test it many times. If I follow these three things, I will be able to develop a program that is as close to optimal as possible.

As I proceeded with this course project, I not only gained knowledge about code optimization but also experience. I think that in programming, as well as coding skills, experience is very important to programmers, and this experience will be very beneficial to me. This project also taught me how to work upstream. Code optimization and portability are not simply about changing programming code: I have to be knowledgeable about operating systems, platforms, and hardware.

by Ilkyu Song (noreply@blogger.com) at April 23, 2018 03:38 AM


Ruihui Yan

Optimizing P7ZIP

Because of problems with HandBrake (since it uses many libraries, as well as FFmpeg), I have decided to tackle another, simpler project. I will be working on P7ZIP, which is a command-line version of 7-Zip. It can be downloaded here.

After downloading the files, we build it by running make all_test :

And now it’s ready to be used. I am using the same file as in the previous post and compressing it to a .zip file.
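The timings below come from wrapping a compression run in the shell’s time builtin, roughly like this (the archive and file names are placeholders):

time ./bin/7za a test.zip testfile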

Comparing the results from three distinct runs, the average runtime is about 1 minute and 45 seconds. Here are the runs:

Run 1: 1m48.808s

Run 2: 1m45.529s

Run 3: 1m43.264s

Analyzing the source code, I found out that the default optimization flag for gcc here is -O, with which “the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time” (Source). Therefore, my plan was to increase the optimization level and write down the results:

 

-O2:

-O3:

To my surprise, not only did the higher optimization levels fail to make a difference, they actually made the runtime longer.

So I tried disabling optimization altogether, by using the flag -O0.

Here are the results:

Run 1: 1m40.043s

Run 2: 1m38.414s

Run 3: 1m41.231s

Surprisingly, removing all the optimizations actually made the program run faster. It went down from 1m45s to 1m40s, around 5% faster.

And that concludes this part of optimization.

by blacker at April 23, 2018 03:24 AM


Ray Gervais

Removing the Excess Years from Angular Material’s DatePicker

An OSD700 Contribution Update

So here we are, potentially the last contribution to occur for OSD700 from this developer before the semester ends and marks are finalized. No pressure.

For this round, I wanted to tackle a feature request which I thought would be beneficial for those who utilize the date picker component (a common UI element). The concept is to dynamically remove and add years to the overall date picker based on the min and max date configurations. Sounds rather simple, right? I thought so too, but I also had to admit I had never worked with the code that dynamically generates the calendar and years portion to this degree before. The inner workings are vastly complex and data driven, which in itself is an interesting design.

The process so far has been an on-and-off cycle of “hey, I get this” and “I have no idea what to do with the current concepts”. You can see throughout my work in progress the various ups and downs of understanding, implementing, and asking for advice and suggestions, which gets us to where we are now. Currently, as I’m writing this, with the help of mmalerba and WizardPC, I have the dynamic year portion working as desired. Some artifacts still need to be addressed: the displayed year range in the header needs to be updated, the years-per-page overlap on the final year if there is more than a 24-year gap between min and max, and a potential ‘today’ variable isn’t always the current date.

There have been many revisions to the code base that I’ve been playing in, often rearranging logic and algorithms to accommodate the four edge cases which are:
1. With no Min / Max provided: the Multi-Year Date Picker behaves as current implementation
2. Only min date provided: Year offset is set to 0, making the min-year the first entry
3. Only max date provided: Year offset is set to a calculated index which equates to max-year being the last entry
4. Both min and max provided: Follows same logic as case 3.

The process of handling the first and second edge cases was relatively painless, in part due to the advice and comments left before I even wrote my first line for this feature set. Below I’ve included that revision and the various revisions I attempted (skipping over the minor changesets) until I finally had a working version a few days later. You can see the progress in my WIP pull request here.

Revision #1 (Min Date Working as Expected)

this._todayYear = this._dateAdapter.getYear(this._dateAdapter.today());
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default Behavior for Offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

// Whole bunch of wrong logic
...

After I clarified that this was indeed what we wanted for the second use case (min provided), there came the harder algorithmic portion for use cases 3 and 4. What I’m working with looks like the following:

Revision #2 (A lot closer to expected logic)

this._todayYear = this._dateAdapter.getYear(this._dateAdapter.today());
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default Behavior for Offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

if (this._maxDate) {
  const maxYear = this._dateAdapter.getYear(this._maxDate);

  // Keep number positive
  const yearOffset = (activeYear - maxYear) >= 0
    ? activeYear - maxYear
    : (activeYear - maxYear) * -1;

  // Determine how far to push offset so that max year is at end of page
  const currentYearOffsetFromEnd = (yearsPerPage / yearOffset) + 1;
  activeOffset = this._minDate ? 0 : currentYearOffsetFromEnd;
}

The snippet below was the logic which should be followed; at first I thought nothing of it, but I realized that (yearOffset - Math.floor(yearOffset)) would 100% return 0.

Revision #3 (Snippet)

const yearOffset = (maxYear - activeYear) / yearsPerPage;
const currentYearOffsetFromEnd = (yearOffset - Math.floor(yearOffset)) * yearsPerPage;
const currentYearOffsetFromStart = yearsPerPage - 1 - currentYearOffsetFromEnd;
// Determine how far to push offset so that max year is at end of page
// const currentYearOffsetFromEnd = Math.floor((yearsPerPage / yearOffset)) + 1;
activeOffset = this._minDate ? currentYearOffsetFromStart : currentYearOffsetFromEnd;

Final Working (Pre Syntax Cleanup)

this._todayYear = this._dateAdapter.getYear(this._dateAdapter.today());
let activeYear = this._dateAdapter.getYear(this._activeDate);

// Default Behavior for Offset
let activeOffset = activeYear % yearsPerPage;

if (!this._maxDate && this._minDate) {
  activeOffset = 0;
}

if (this._maxDate) {
  const maxYear = this._dateAdapter.getYear(this._maxDate);

  const yearOffset = (maxYear - activeYear) / yearsPerPage;
  const currentYearOffsetFromEnd = (yearOffset - Math.floor(yearOffset)) * yearsPerPage;
  const currentYearOffsetFromStart = yearsPerPage - 1 - currentYearOffsetFromEnd;

  activeOffset = this._minDate
    ? currentYearOffsetFromStart
    : (24 % currentYearOffsetFromEnd) - 1;
}

this._years = [];
for (let i = 0, row: number[] = []; i < yearsPerPage; i++) {
  row.push(activeYear - activeOffset + i);
  if (row.length == yearsPerRow) {
    this._years.push(row.map(year => this._createCellForYear(year)));
    row = [];
  }
}
this._changeDetectorRef.markForCheck();

Words cannot describe the waves of frustrated “this will never work” monologues and relieved “this is progress” exhales that occurred during the past week while working on this feature, nor can words describe the amount of dancing-while-no-one-is-around that I did when I finally reached the current implementation. Based on the use cases mentioned above, here is a visual for each:

Case 1: No Min / Max Date Provided

Case 2: Min Date Provided

Case 3: Max Date Provided

Case 4: Both Min / Max Date Provided

I cannot fully explain the thought process that led to the final conclusion, but I can explain the biggest flaw in my own thinking. I overthought quite a bit, and became overwhelmed by the thought that I would not complete this, or that the code base was too complex (I will, and it’s not). I suppose the time of day I typically worked on this bug didn’t cater well to my mentality while approaching the code, nor did my mindset of ‘one more item due’. Once I took the weekend to correct that, and to slowly relearn the task and the changes required (instead of breaking the scope into much bigger, unmanageable portions in an attempt to ‘get it done’), my thoughts and attempts became much clearer.

What’s left? Well, at the time of writing this post I still have to fix the headers; isolate, identify, and fix any edge cases the algorithm doesn’t take into account; and clean up any useless commented-out code. I believe it can be done, and after today’s progress I can happily say that I’m more optimistic than I was on Friday about completing this feature request. I’ve loved contributing, learning what I can through toil and success, and feeling the “I can accomplish anything” high when the pieces finally click. Once I settle down in my new role, I hope to keep contributing both to Angular Material and to new projects spanning different disciplines and interests.

by RayGervais at April 23, 2018 03:20 AM


Sanjit Pushpaseelan

SPO600 Stage 3- Final blog

This will be my final blog post for this project. As I mentioned in yesterday’s post, I will be recapping the work I’ve done over the past month and a half and analyzing my results.
First off, I would like to start by going over the basic benchmarking I did with MD5deep. I was using a txt file that was 2.4 GB large.
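Each row in the table below came from a timed run along these lines (the file name is a placeholder):

time ./md5deep bigfile.txt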

real(s) user(s) sys(s)
9.367 8.151 1.450
9.331 8.189 1.383
9.294 8.300 1.230
9.364 8.378 1.225
9.255 8.365 1.226
9.318 8.311 1.239
9.380 8.271 1.334
9.339 8.356 1.218
9.278 8.334 1.167
9.338 8.448 1.125

The average runtime for MD5deep was about 9.3 seconds with a 2.4gb file. Please keep this in mind while I talk about the optimizations I made throughout this project.

Exploring build flags

I started off by messing with the build flags to see if I could improve the runtime of the program. I first noticed that the makefile used O2 to build the program, so I decided to try building with O3 and Ofast. My first issue was locating the right makefile: I never realized it at first, but MD5deep actually has two makefiles. The one I needed to edit was in the src folder where all the ELF files are stored. I got some interesting results when I finally made the changes. (If you want to see the original blog post, click here)

O3 RESULTS:

real(s) user(s) sys(s)
9.294 8.243 1.278
9.319 8.188 1.370
9.326 8.046 1.370
9.391 8.049 1.512
9.336 8.279 1.280
9.324 8.362 1.196
9.328 8.256 1.295
9.284 8.145 1.376
9.284 8.278 1.238
9.381 8.285 1.333

OFAST RESULTS:

real(s) user(s) sys(s)
9.435 8.269 1.403
9.391 8.302 1.321
9.304 8.322 1.212
9.328 8.318 1.239
9.288 8.237 1.282
9.297 8.237 1.282
9.297 8.268 1.250
9.265 8.451 1.040
9.302 8.190 1.354
9.287 8.214 1.301

As you can see, my results are pretty close to the original build run. While they are a bit lower than the original build, I believe I can chalk that up to variance since I only ran 10 tests. Even then, the difference is so negligible it is not worth mentioning. The question to ask is why O3 and Ofast do not help. This required me to do some more research into these compiler flags. After doing so, I learned that O3 and Ofast aren’t guaranteed to actually improve the run times of your code! -O3 turns on the following options along with whatever O2 turns on:

-finline-functions
-funswitch-loops
-fpredictive-commoning
-fgcse-after-reload
-ftree-loop-vectorize
-ftree-loop-distribution
-ftree-loop-distribute-patterns
-floop-interchange
-floop-unroll-and-jam
-fsplit-paths
-ftree-slp-vectorize
-fvect-cost-model
-ftree-partial-pre
-fpeel-loops
-fipa-cp-clone

Ofast turns on all of O3 options along with the following:

-ffast-math

-fstack-arrays

Just because these flags are turned on doesn’t necessarily mean they do anything. I wasn’t able to figure out what all of these flags do, but it is clear that they have no effect on this build. Something I did find is that O3 can use a cmov, which lengthens the loop dependency chain so it can include said cmov. This might have actually caused my build to run slower, which would have been an interesting result.
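A quick way to see exactly which options each level toggles is to ask GCC itself; this is a general GCC technique, not something specific to the MD5deep build:

# Dump the optimizer settings at each level and compare them
gcc -Q --help=optimizers -O2 > o2.txt
gcc -Q --help=optimizers -O3 > o3.txt
diff o2.txt o3.txt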

Changing the code

The second thing I tried was changing the code. The first change was to inline a function, which might have reduced the runtime. You can find the original blog post here.

real(s) user(s) sys(s)
9.266 8.394 1.097
9.372 8.316 1.294
9.289 8.347 1.172
9.285 8.308 1.213
9.334 8.274 1.293
9.350 8.265 1.315
9.313 8.251 1.297
9.264 8.337 1.318
9.286 8.267 1.254
9.319 8.255 1.276

Once again, my runtime was pretty similar to my original build. This was pretty simple to explain: the cost of calling the function wasn’t nearly as expensive as I thought it would be, so the improvement was negligible.
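For reference, the general shape of such a change looks like this; the function below is illustrative only, not the actual MD5deep helper:

#include <stdint.h>

/* Marking a small hot helper as static inline removes the call
 * overhead. At -O2, GCC usually inlines a function this small on
 * its own, which is why the change made no measurable difference. */
static inline uint32_t rotate_left(uint32_t x, uint32_t n)
{
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}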

My second attempt to improve the code was to rewrite some of it to hoist loop-invariant computations out of their loops.

(The original post showed the changed code and the original code side by side.)
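The general pattern of such a change looks roughly like this (illustrative only, not the actual MD5deep code):

#include <stddef.h>

/* Before: the quotient is recomputed on every pass of the loop. */
void scale_before(double *out, const double *in, size_t n,
                  double scale, double divisor)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * (scale / divisor);
}

/* After: scale / divisor never changes inside the loop,
 * so it is hoisted out and computed once. */
void scale_after(double *out, const double *in, size_t n,
                 double scale, double divisor)
{
    const double factor = scale / divisor;
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * factor;
}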

Now, what made me curious is that this changed code crashed when I ran my 2.4 GB file, but it didn’t crash on smaller files. I was never able to figure out why. I also wasn’t able to find the cut-off point for file size. Hopefully I can work on this issue more after I finish up my school work, but for now I have reached a dead end with this problem.

Implementing Assembly Language

My final attempt was to implement assembly language into MD5Deep. You can find the original post here.

#define DISPLAY(x,n) __asm__("ror %%cl, %0" : "=r" (x) : "0" (x), "c" (32 - (n)))

Above is the code I used to try and improve the runtime.

real(s) user(s) sys(s)
9.278 8.334 1.167
9.338 8.448 1.125
9.303 8.295 1.237
9.342 8.329 1.243
9.308 8.357 1.179
9.318 8.303 1.255
9.366 8.394 1.203
9.327 8.326 1.244
9.367 8.151 1.450
9.331 8.189 1.383

Once again, these runtimes are similar to my original runtimes. I haven’t found a definite answer as to why this is the case, but I believe the manipulation I am doing is simply not enough to have a measurable effect. It is a similar case to the function inlining, where the improvement is so negligible that it is not worth mentioning.

 

Final reflection

Despite my results, I had fun working on this project (when I had the time). I learned a bit about the behavior of compile flags, and learned that sometimes it just isn’t worth the effort to try to optimize code. Sadly, my efforts were in vain and I was unable to get any significant results out of my time.

by sanjitps at April 23, 2018 02:47 AM


Svitlana Galianova

It's only a beginning

What have I learned for the past 8 months of Open Source Programming?

I was convinced from the very beginning that this course would be important for my career, experience and learning process. But at the same time I was scared: what can I do for a large project in a hype technology company? Am I smart enough? Do I have enough knowledge?

The first thing that I learned is how to feel confident. Before this course, when I was getting a new task/assignment either in school or at work, I was stressing out. But now the first thought that pops into my head is "I can do it". Open Source taught me that I don't have to know everything. In fact, nobody can know everything; it is just impossible. The real question is: where can I find the needed resources? With Google and Stack Overflow (God bless the person who had the amazing idea to connect the community), there is nothing to stress about; everything can be found online.

Another thing that I have learned: a programmer is not a person who sits somewhere alone and just comes up with brilliant ideas; it's a community where ideas from hundreds of people are combined and strong software is built. You are not supposed to be scared to ask for help if you need it. Sometimes one small push/line/idea will start the thought process, and another idea will be born.

All those conclusions sound so obvious, but it's hard to actually believe them and experience the happiness of being connected with other people, or the opportunity to ask for help if you need it.
I am grateful to the Open Source course at Seneca, which showed me another way software may be built, how to stand out from the crowd, how to keep up with new trends in technology, and how to be part of a massive community around the world. Sometimes open source leads to getting a job, and it always leads to gaining valuable experience and brushing up your skills. Your GitHub profile is a real-time resume, and it really shows that programming is your passion, or at least something you enjoy doing in your free time.

I am amazed by how much my personality has changed and how much more confident I have become as a programmer. I am also not as anxious when I don't have an answer for another question from my manager.

It's not the end of my Open Source "career" for sure; I will keep contributing to the Mozilla Addons-Frontend project since I enjoy it so much. I think that open source is a great place to maintain your current skills and gain new ones. I am happy that I had the opportunity to learn that GitHub is so much more than just a version control tool! It is such a thrill to see another email notification from GitHub; I suddenly feel important.





by svitlana.galianova (noreply@blogger.com) at April 23, 2018 01:44 AM


Patrick Godbout

DPS909 Release 03: My Second Open Source Contribution

So after a lengthy but rewarding experience completing the pull request for my first open source contribution, we were now faced with building on that experience by tackling Release 0.3, which involved taking a step up from what we had already accomplished. To do this, I've decided to take on what I considered to be a more difficult bug in the Brave browser.

The bug I've chosen can be found here: https://github.com/brave/browser-laptop/issues/7645

Introduction; Explaining the Bug, Exploring Known Territory


The bug this time around involves the back and forward navigation buttons, more precisely, what happens when you press and hold one of those buttons for a long time. Here's a visual representation of the bug:


When pressing one of the navigation buttons and holding the mouse click, a dropdown appears: a list of previously visited websites if you clicked and held back, and a list of previously navigated websites if you clicked and held forward. The bug happens when you release the click, and then click the same button once more to perform either the forward or back action. This performs the action but keeps the dropdown list alive and displayed on your screen, which is unintuitive given that the list doesn't update after that second click. Therefore, this is a bug, and it needs to be fixed.

This became familiar territory as I recalled my previous experience with Release 0.2 and dealing with the Brave browser history and its components. This was a huge help at first, as it gave me many leads on finding the source of this particular problem. Without too much effort, I've narrowed down the interesting parts of the code involving these controls to the following files:

app\common\state\contextMenuState.js
app\renderer\components\navigation\buttons\backButton.js
app\renderer\reducers\contextMenuReducer.js

Following this is a look into what's interesting in each of those files, including what I've added or modified.

Problem Solving Approach; How to Fix this Bug

The first thing I did when approaching the problem was to understand how to fix it. In my Release 0.2, I had not been thorough enough in this step. I looked into other browsers and tried replicating the bug in both Chrome and Firefox. They both did what I expected the behavior should be: when the menu shows up following a long press of the back or forward button, you can click the back or forward button again a single time, and the dropdown list goes away. This became the goal I was aiming for.

I looked into the call stack of the events that happen when reproducing the bug, and that's what allowed me to narrow down the list of important files to those mentioned above. Without further ado, here's a more in-depth look at each file:

File: app\common\state\contextMenuState.js


Now, the GitHub issue for this bug includes a comment mentioning that a previous fix to a different issue was similar to what needs to be done for this problem. That fix dealt with navigation on the hamburger menu of the browser, which is included in the picture linked here. Generally speaking, this part of the code helps set the context menu details in the case of the hamburger menu, so that when hovering with the mouse over any of the components of the menu, the menu knows to switch from one component to the other. Since we're looking for similar behavior with the back and forward buttons (the only difference being the way it is toggled), I have added similar pieces of code with a new type variable (onBackLongPressMenuWasOpen) to indicate whether or not the menu should be displayed below the forward and back buttons.

File: app\renderer\components\navigation\buttons\backButton.js



This file was interesting because it contains the class that constitutes the back and forward buttons, along with the methods that are called and needed for this fix. If you notice, the onBack method performs multiple checks on whether the previously visited tab is navigable. If it is, it clones the tab and renders it active; if not, it makes the current tab active, which, to the eyes of the user, does nothing. The onBackLongPress method finds the parent node, which helps it identify which component includes the dropdown list or whatever child component it may have. Storing it in the rect variable, it then passes its position coordinates to the onGoBackLong method, which handles displaying the menu. That particular method is part of the next file we will be looking at:

File: app\renderer\reducers\contextMenuReducer.js




For the sake of explanation: onLongBackHistory and onLongForwardHistory both have a similar structure, the only difference being how they retrieve history objects. An outer if check was added, with a type tag attached to the action, to try to prevent the creation of the submenu, which happens within the second-outermost if check (line 541).

Once the code detects that there is a history to be shown by the submenu, it creates a menuTemplate which is sent somewhere else for display purposes. This is the part of the code we want to work with, since we don't always want the menu to be created. To explore overriding possibilities with the menu, the addition and detection of the tag within this code prevents the creation of the menu, defaulting it to the empty template you can find in the else statement on line 587.

If you notice, line 584 sets the type to the one we declared in the first file mentioned above. This is the tag that lets the context menu state know the difference. On the other hand, when we don't fall into this situation from the if check, line 590 takes care of toggling the type back to 'false', allowing the creation of the submenu once the code hits this method again.

Examining more Files, and Discussing the Solution

File: app\browser\reducers\tabsReducer.js


When it comes to this file, APP_ON_GO_BACK and APP_ON_GO_BACK_LONG, as well as their FORWARD counterparts, are the sections of code called once the handling of the back and forward submenus comes into play. This file is important because the way the information gets sent to the goBack and onLongBackHistory methods from the files seen above can dictate how the display of these components should be handled before it gets there.

Discussing the Solution

The solution has not been completed as of now (2018-04-22); there are key steps missing in these files, but the locations where logic needs to be added have been pinpointed. Generally speaking, if the toggling mechanism works and lets the code know how to initialize the components under the forward and back buttons, the rest will follow.

Closing Words

This will be my last post in regards to my Open Source Development class, and as such, I would like to take this chance to speak about how valuable this experience has been. As a programmer, having had the chance to touch so many concepts of open source development within the short timeframe of a semester has simply been very enriching.

In regards to our teacher, David Humphrey, we had a very knowledgeable information bank at our disposal as the course progressed, and he remains one of the important factors that let me expand my knowledge of open source development.

In regards to our release assignments, which often included pull requests and contributions to open source projects as opposed to our laboratory experiments, they will continue to be hard proof of all our progress in the class within the open source world. Being able to say we've contributed to major projects is an achievement that will last a long time.

Finally, in regards to the classmates and the way the course was built, the experience was enhanced further by contributing to our own projects as a base to build onto. These structures are what led us to the possibility of contributing to major projects, and eased our way there as much as they should.

I hope to continue blogging someday about programming, as it is both a passion and a career, and a good reminder and help to others who share it.

by Patrick G (noreply@blogger.com) at April 23, 2018 01:43 AM


Hao Chen

Default behavior

This week in Open Source, I will be tackling this issue.

The cursor position within the QuickOpenInput jumps around when the user is trying to select from the listed options with UP and DOWN arrow keys. Not so convenient if you wish to add more characters to the input.

To start, I want to see what function is being called whenever the UP or DOWN arrow key is pressed. To do this, I set a breakpoint within the HTML page to pause on subtree modification.

I wasn’t able to find a specific function that alters the state of the cursor position. Within the DOM, the input field has two properties I wish to observe: selectionStart and selectionEnd. I googled around and found out that the cursor jumping to the beginning and end of an input is actually intended behavior.

So I tested this out to confirm:

So how do I prevent the default behavior of an event?

Event.preventDefault()

Below is a definition of what this function should do according to W3Schools:

Added the call within the ArrowUp and ArrowDown event handlers.
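In sketch form, the handler ends up looking something like this (the element reference and surrounding code are assumptions, not the actual QuickOpenInput source):

// Hypothetical sketch, not the real source
input.addEventListener('keydown', (event) => {
  if (event.key === 'ArrowUp' || event.key === 'ArrowDown') {
    // Stop the browser from jumping the cursor to the start/end
    event.preventDefault();
    // ...then move the highlighted option as before
  }
});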

Voila~


Default behavior was originally published in Haorc on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Hao Chen at April 23, 2018 01:39 AM


Adam Kolodko

Outcomes

The overnight tests of Blender showed that the ‘-finline-functions’ flag was what caused the ‘perlin()’ function to improve by a factor of twelve. The hope was that an optimizer flag could improve the function without negative effects on the other functions. Unfortunately this was not the case: the -finline-functions flag may have improved performance of that one function, but every other function increased in run time.

Below is the gprof output for -finline-functions; the comparison is based on the results posted previously.

% cumulative
time seconds name
44.26 1461.83 ccl::QBVH_bvh_intersect_hair(ccl::KernelGlobals*, ccl::Ray const*, ccl::Intersection*, unsigned int, unsigned int*, float, float)
13.56 1909.60 ccl::noise_turbulence(ccl::float3, float, int) [clone .constprop.197]
7.54 2158.54 ccl::QBVH_bvh_intersect_shadow_all_hair(ccl::KernelGlobals*, ccl::Ray const*, ccl::Intersection*, unsigned int, unsigned int, unsigned int*)
7.20 2396.40 GaussianYBlurOperation::executePixel(float*, int, int, void*)
3.56 2513.84 ccl::svm_eval_nodes(ccl::KernelGlobals*, ccl::ShaderData*, ccl::PathState*, ccl::ShaderType, int)
3.05 2614.52 ccl::kernel_path_trace(ccl::KernelGlobals*, float*, int, int, int, int, int)
2.06 2682.40 ccl::shader_setup_from_ray(ccl::KernelGlobals*, ccl::ShaderData*, ccl::Intersection const*, ccl::Ray const*)
1.88 2744.62 ccl::light_sample(ccl::KernelGlobals*, float, float, float, ccl::float3, int, ccl::LightSample*)
1.85 2805.79 ccl::kernel_path_surface_bounce(ccl::KernelGlobals*, ccl::ShaderData*, ccl::float3*, ccl::PathState*, ccl::PathRadianceState*, ccl::Ray*)
1.58 2858.14 GaussianXBlurOperation::executePixel(float*, int, int, void*)
1.03 2892.22 ccl::primitive_tangent(ccl::KernelGlobals*, ccl::ShaderData*)
0.91 2922.42 svbvh_node_stack_raycast(SVBVHNode*, Isect*)
0.91 2952.52 ccl::perlin(float, float, float)

Something to notice: my worry about the optimization causing another function called ‘microfacet_beckmann()’ to be called in place of ‘perlin’ was unfounded.

Another thing to notice is that every other call has increased runtime. This may mean we want to isolate this function and simply inline it on its own.

Let’s take a look at this function.

#ifndef __KERNEL_SSE2__
ccl_device_noinline float perlin(float x, float y, float z)
{
int X; float fx = floorfrac(x, &X);
int Y; float fy = floorfrac(y, &Y);
int Z; float fz = floorfrac(z, &Z);

float u = fade(fx);
float v = fade(fy);
float w = fade(fz);

float result;

result = nerp (w, nerp (v, nerp (u, grad (hash (X , Y , Z ), fx , fy , fz ),
grad (hash (X+1, Y , Z ), fx-1.0f, fy , fz )),
nerp (u, grad (hash (X , Y+1, Z ), fx , fy-1.0f, fz ),
grad (hash (X+1, Y+1, Z ), fx-1.0f, fy-1.0f, fz ))),
nerp (v, nerp (u, grad (hash (X , Y , Z+1), fx , fy , fz-1.0f ),
grad (hash (X+1, Y , Z+1), fx-1.0f, fy , fz-1.0f )),
nerp (u, grad (hash (X , Y+1, Z+1), fx , fy-1.0f, fz-1.0f ),
grad (hash (X+1, Y+1, Z+1), fx-1.0f, fy-1.0f, fz-1.0f ))));
float r = scale3(result);

/* can happen for big coordinates, things even out to 0.0 then anyway */
return (isfinite(r))? r: 0.0f;
}
#else
ccl_device_noinline float perlin(float x, float y, float z)
{
ssef xyz = ssef(x, y, z, 0.0f);
ssei XYZ;

ssef fxyz = floorfrac_sse(xyz, &XYZ);

ssef uvw = fade_sse(&fxyz);
ssef u = shuffle(uvw), v = shuffle(uvw), w = shuffle(uvw);

ssei XYZ_ofc = XYZ + ssei(1);
ssei vdy = shuffle(XYZ, XYZ_ofc); // +0, +0, +1, +1
ssei vdz = shuffle(shuffle(XYZ, XYZ_ofc)); // +0, +1, +0, +1

ssei h1 = hash_sse(shuffle(XYZ), vdy, vdz); // hash directions 000, 001, 010, 011
ssei h2 = hash_sse(shuffle(XYZ_ofc), vdy, vdz); // hash directions 100, 101, 110, 111

ssef fxyz_ofc = fxyz - ssef(1.0f);
ssef vfy = shuffle(fxyz, fxyz_ofc);
ssef vfz = shuffle(shuffle(fxyz, fxyz_ofc));

ssef g1 = grad_sse(h1, shuffle(fxyz), vfy, vfz);
ssef g2 = grad_sse(h2, shuffle(fxyz_ofc), vfy, vfz);
ssef n1 = nerp_sse(u, g1, g2);

ssef n1_half = shuffle(n1); // extract 2 floats to a separate vector
ssef n2 = nerp_sse(v, n1, n1_half); // process nerp([a, b, _, _], [c, d, _, _]) -> [a', b', _, _]

ssef n2_second = shuffle(n2); // extract b to a separate vector
ssef result = nerp_sse(w, n2, n2_second); // process nerp([a', _, _, _], [b', _, _, _]) -> [a'', _, _, _]

ssef r = scale3_sse(result);

ssef infmask = cast(ssei(0x7f800000));
ssef rinfmask = ((r & infmask) == infmask).m128; // 0xffffffff if r is inf/-inf/nan else 0
ssef rfinite = andnot(rinfmask, r); // 0 if r is inf/-inf/nan else r
return extract(rfinite);
}
#endif

You can see that this function is divided into SIMD and non-SIMD versions; because this build is x86, I will assume it compiled as the SIMD version.

For some reason this function carries the no-inline declaration. I'm unsure why this might be the case, and if I had the time I would rebuild Blender with only perlin as an inline function.
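A hypothetical way to try that without rebuilding everything under -finline-functions would be a per-function attribute; this is a sketch only (in the Cycles source it would mean changing the ccl_device_noinline qualifier on perlin, rather than adding code like this):

#include <stdio.h>

/* Sketch: force GCC to inline one specific hot function. */
static inline __attribute__((always_inline))
float fade(float t)
{
    return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f);
}

int main(void)
{
    printf("%f\n", fade(0.5f));
    return 0;
}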

Unfortunately that would be out of scope, as I'm just testing optimizer flags in this project. Through sheer brute force it is clear that individual optimization flags aren't the way to improve performance.

Below is a table of each optimization flag and its corresponding effect on Blender's runtime.

Flag  Runtime/Seconds

-O2                             3245.36

-fvect-cost-model               3242.05
-floop-unroll-and-jam           3247.36
-ftree-partial-pre              3247.84
-ftree-loop-distribute-patterns 3251.57
-fsplit-paths                   3252.05
-floop-interchange              3255.06
-ftree-slp-vectorize            3255.77
-ftree-loop-vectorize           3260.45
-fpredictive-commoning          3266.47
-fgcse-after-reload             3275.78
-ftree-loop-distribution        3288.16
-fpeel-loops                    3283.03
-fipa-cp-clone                  3283.68  
-finline-functions              3303.00
-funswitch-loops                3306.21 

-O3                             3350.36

by ahkol at April 23, 2018 01:31 AM


Svitlana Galianova

Release 0.6: More contributions

So it's been another two weeks already.
I am still in the "honeymoon" phase with the Mozilla Addons-frontend project. There are always bugs for me to fix.

My strategy didn't change that much from the previous release:

1) find a bug
2) reproduce a bug
3) find the quickest fix by modifying the state of the properties right in Google Developer Tools
4) try to make the fix more specific for the needed area of code
5) improve my fix, remove redundant code
6) submit PR

The first bug I fixed was caused by my previous PR, so I felt it was my responsibility to fix it. It was about putting the right icon on the right error message. Originally the green message had an exclamation mark as an icon, but I had changed it to the Mozilla Firefox icon. My fix affected the red message as well, so I wrote a different SCSS class to handle the green message.
I was going through the list of bugs and saw a few similar ones about unbroken strings: when a string is too long, it is not nicely cropped with an ellipsis at the end, but continues to live through the next containers and only ends at the edge of the browser (like the URL in the picture):


I had not done anything to handle this situation before, and I feel it's something every programmer who touches front-end should know. So I decided to dig in. I found the issue and a solution, and submitted another PR

So I learned how to break unbroken strings:
display: block;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap; /* usually also needed for a single-line ellipsis */

I came across another similar bug and decided to contribute there as well.
So if you are a Mozilla Firefox user and you are interested in add-ons, the homepage will not be overwhelmed by overflowing strings:


For now, none of my pull requests have been merged; they are pending review, probably on Monday.






by svitlana.galianova (noreply@blogger.com) at April 23, 2018 01:18 AM


Adam Kolodko

A build for every flag

To find which specific optimization flag affected the ‘perlin()’ function, I used brute force: 15 different builds were made using the 15 different O3 flags on top of a normal ‘-pg -O2’ build of Blender.

-finline-functions
-funswitch-loops
-fpredictive-commoning
-fgcse-after-reload
-ftree-loop-vectorize
-ftree-loop-distribution
-ftree-loop-distribute-patterns
-floop-interchange
-floop-unroll-and-jam
-fsplit-paths
-ftree-slp-vectorize
-fvect-cost-model
-ftree-partial-pre
-fpeel-loops
-fipa-cp-clone

Below is what it looks like to run 15 concurrent build processes.

Screenshot from 2018-04-21 17-01-18

After the build process finished a few hours later, I wrote a simple bash script to run the 15 tests and gprof each one. This testing process will take about 7 hours, as the image render takes 25 minutes each.

This is a sample of the script

#!/bin/bash

cd
cd ~/blender-git/ftreeDP/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeDP -f 1
gprof ./blender > testFtreeDP

cd
cd ~/blender-git/ftreeS/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeS -f 1
gprof ./blender > testFtreeS.txt

cd
cd ~/blender-git/fsplit/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fsplit -f 1
gprof ./blender > testFsplit.txt

cd
cd ~/blender-git/floopI/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/floopI -f 1
gprof ./blender > testFloopI.txt

cd
cd ~/blender-git/floopU/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/floopU -f 1
gprof ./blender > testFloopU.txt

cd
cd ~/blender-git/ftreeD/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeD -f 1
gprof ./blender > testFtree.txt

cd
cd ~/blender-git/finline/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/finline -f 1
gprof ./blender > testFinline.txt

cd
cd ~/blender-git/ftreeV/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/ftreeV -f 1
gprof ./blender > testFtreeV.txt

cd
cd ~/blender-git/fpredictive/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fpredictive -f 1
gprof ./blender > testFpredictive.txt

cd
cd ~/blender-git/fgcse/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/fgcse -f 1
gprof ./blender > testFgcse.txt

cd
cd ~/blender-git/funswitch/bin/
./blender -b ~/blend_cat/fishy_cat.blend -o ~/funswitch -f 1
gprof ./blender > testfunswitch.txt

The logic is as follows

cd  # move to the home directory to avoid rerunning a test in case of an invalid directory
cd ~/blender-git/funswitch/bin/  # move to the directory meant for this flag
./blender -b ~/blend_cat/fishy_cat.blend -o ~/funswitch -f 1  # perform the render: -b runs without the GUI, ~/blend_cat/fishy_cat.blend is the file to render, -o ~/funswitch is the output directory for the image, -f 1 renders frame 1 as if it were an animation with one image
gprof ./blender > testfunswitch.txt  # create the gprof text file to examine the test results
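The same logic could be collapsed into a single loop; here is a sketch using the build directory names above (untested against the actual layout):

#!/bin/bash
# Render once per flag build, then capture its gprof output
for build in ftreeDP ftreeS fsplit floopI floopU ftreeD finline ftreeV fpredictive fgcse funswitch; do
    cd ~/blender-git/$build/bin/ || continue
    ./blender -b ~/blend_cat/fishy_cat.blend -o ~/$build -f 1
    gprof ./blender > test$build.txt
done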

Next blog post I will evaluate the results and attempt to understand the reason for the changes.

by ahkol at April 23, 2018 01:07 AM


Vimal Raghubir

Fixing Bugs in Kubernetes Website

So in this week’s open source adventures, I decided to tackle some bugs in the Kubernetes website GitHub repository, which can be accessed here. Before I begin discussing the bug fixes I made in this repository, I would like to highlight the reason I chose it. If you have read one of my previous blog posts, titled “Kubernetes”, you will already know that I am deeply fascinated by Kubernetes as well as other DevOps platforms. For that reason, I decided to tackle some bug fixes in Kubernetes regardless of the bug or the language it is in.

After exploring several repositories in Kubernetes, I noticed a trend: most bugs require intermediate to expert knowledge of the software/language to fix, including the beginner bugs. I have started learning Go, the language behind the majority of the Kubernetes architecture, as well as experimenting with the application on my own. What I came to realize is that it’s an ongoing learning experience, and I need to build a foundation before I REALLY start to tackle bugs in the application.

Until that time comes, I decided to tackle simpler bugs, such as those in their website, which can be seen here. The first bug fix I made was to change some documentation that was causing confusion for other developers. My pull request can be accessed here. The bug was that the documentation previously stated you would need to use the NodePort value provided by accessing your Service’s details, which is incorrect. As stated by Dick Murray, a contributor to this repository, the NodePort value does not work in conjunction with your external IP address; the Port value does.

So the recommended fix was to change the documentation to reflect this. Below is the change on GitHub.

And below is the change on the actual website.

Onto the second bug fix! For this one, there was some header text hiding behind the main header that was only visible in the Safari browser. Although it could only be seen in Safari, the text could still be searched in all browsers, as shown below on Safari.

If you cannot make out the header text clearly from this picture, it can be seen clearer in the screenshot below.

Although it isn’t necessarily bothering anyone, it is still a bug that requires cleaning up. I first had to pinpoint the exact file this code exists in, but thankfully this was done for me by Zachary Sarah. Zachary’s recommendation was to simply remove this navigation bar, since it isn’t useful anyway. I made my pull request and it has been successfully merged. The changes can be seen below.

This was a fix that was gladly accepted by the developers in this repository as indicated by Brad Topol.

In conclusion, I am absolutely excited to have finally dipped my toes into the waters of Kubernetes after several weeks of open source adventures. This is definitely only the beginning, and I am ecstatic to continue fixing bugs in this repository as well as the others that Kubernetes has. Not only is fixing bugs rewarding, but I finally feel like I am making an impact with my development compared to the other school projects I’ve done. I am beyond thankful for all the lessons I have received from my open source professor David Humphrey, and will not put his teachings to waste! Open source has taught me to think bigger than ever before, and that is something everyone is seeking in all aspects of life.

Once again I do look forward to many more open source adventures and this is the mark of many more to come! See you in my next adventure!

by Vimal Raghubir at April 23, 2018 12:35 AM


Jeffrey Espiritu

SPO600 Project – Stage 3 / Part 3

Inline Assembly Additions I changed the for loop in the FLAC__fixed_compute_best_predictor function from this: to this: FLAC__int32 errors[4]; FLAC__uint32 total_error_0 = 0; register FLAC__uint32 total_error_1 asm("r19") = 0; register FLAC__uint32 total_error_2 asm("r20") = 0; register FLAC__uint32 total_error_3 asm("r21") = 0; register FLAC__uint32 total_error_4 asm("r22") = 0; __asm__ ("movi v10.4s, #0" ::: "v10"); for (i = … Continue reading SPO600 Project – Stage 3 / Part 3

by jespiritutech at April 23, 2018 12:15 AM

April 22, 2018


Dan Epstein

Optimizing & Benchmarking SPO600 Project Stage 3

Recap on Stage 2

Previously, in stage 2, I performed multiple benchmark tests on sha256deep with an altered build option of O3, compared to the currently implemented O2. I tested the time it takes to hash files of 10 MB, 100 MB, and 1 GB. The tests took place on multiple servers with different hardware and configurations. The first server is aarchie, which is equipped with the ARMv8 (aarch64) architecture. I also performed tests on the bbetty and charlie servers, which have the same architecture, just with more memory. I compared the benchmark results between aarchie and xerxes (x86_64 architecture). Below are the benchmark results from stage 2; I have only included aarch64 and x86_64 for comparison, because they are different architecture types.

10 MB

Server              Flag  real      user      sys
aarchie (aarch64)   -O2   0m0.092s  0m0.066s  0m0.016s
xerxes (x86_64)     -O2   0m0.094s  0m0.095s  0m0.006s
aarchie (aarch64)   -O3   0m0.075s  0m0.067s  0m0.010s
xerxes (x86_64)     -O3   0m0.093s  0m0.094s  0m0.004s

100 MB

Server              Flag  real      user      sys
aarchie (aarch64)   -O2   0m0.759s  0m0.668s  0m0.099s
xerxes (x86_64)     -O2   0m0.861s  0m0.882s  0m0.048s
aarchie (aarch64)   -O3   0m0.705s  0m0.662s  0m0.060s
xerxes (x86_64)     -O3   0m0.864s  0m0.870s  0m0.062s

1 GB

Server              Flag  real      user      sys
aarchie (aarch64)   -O2   0m7.690s  0m6.799s  0m1.013s
xerxes (x86_64)     -O2   0m8.762s  0m8.946s  0m0.490s
aarchie (aarch64)   -O3   0m7.170s  0m6.698s  0m0.655s
xerxes (x86_64)     -O3   0m8.790s  0m8.938s  0m0.535s

Reflection

The fastest server based on the results is aarchie, with an improvement of 5.88% when hashing the 1 GB file using the O3 flag. There is almost no difference when hashing a small file. For xerxes, the 1 GB time changed by only about 0.38%, and in the wrong direction: the O3 run was slightly slower. I've noticed that with a larger file, O3 seems to be the better option to use. Unfortunately, there is not much of an improvement for small files on any of the servers.

[Chart: percentage change from O2 to O3 on aarchie and xerxes]

Then, in the next part of stage 2, I wanted to further optimize this project's sha256_update function (sha256deep). Sadly I couldn't, because the code already seems to be optimized. The reason I believe this is that during my research I found that memcpy is the fastest copy method in C. The alternative to memcpy is inline assembly, which could be more efficient because you have more control over the data that is copied. The other factor suggesting this code is mostly optimized is that it uses the right data types (uint8_t & uint32_t), which are best for storing small values.
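For context, here is a minimal sketch of the buffering pattern such update functions follow; this is illustrative only, not the actual sha256deep source:

#include <stdint.h>
#include <string.h>

/* The hot path is a single memcpy into a fixed 64-byte block
 * buffer, which is hard to beat in portable C. */
struct sha256_ctx {
    uint8_t  buffer[64];   /* one SHA-256 block */
    uint32_t buffered;     /* bytes currently held in buffer */
};

static void sha256_update_sketch(struct sha256_ctx *ctx,
                                 const uint8_t *data, uint32_t len)
{
    while (len > 0) {
        uint32_t n = 64 - ctx->buffered;
        if (n > len)
            n = len;
        memcpy(ctx->buffer + ctx->buffered, data, n);
        ctx->buffered += n;
        data += n;
        len -= n;
        if (ctx->buffered == 64) {
            /* the real code would run the compression function here */
            ctx->buffered = 0;
        }
    }
}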

Summary

As I mentioned in my previous blog, I think this could be better optimized by using inline assembler to replace the memcpy function, but since I don't have much experience with assembly language, this couldn't be implemented. The whole project experience was tough and challenging, but I do feel I learned a lot from it. The hardest part was finding a function that could potentially be optimized. I learned many techniques for evaluating a function and trying to optimize it, using the optimizations we learned in class, different build options, and software profiling. However, I would need more experience and practice in order to fully optimize this project (to use inline assembler).

I have decided not to submit a pull request to the hashdeep project repository, because it seems it would take a very long time to get a response or to have the changes accepted (there are pull requests still pending since February 2018), so there won't be enough time to get these changes accepted and report back. Overall, this was a great experience, and hopefully these blogs will help any students who take SPO600 in the future.

by Dan at April 22, 2018 10:51 PM