Planet CDOT

October 16, 2017


Jiyoung Bae

SPO600: LAB6 (Algorithm Selection Lab)

To optimize volume scaling, three approaches were tried for this lab. All of them used 0.75 as the volume factor and 500 million random samples. The time command and /usr/bin/time were used to check the details, although some time was unaccounted for between the total time and the sum of the user and system times.

1. volume_out = sample_data * volume_factor: This method was simple but slow, because every sample required a floating-point multiplication and two type conversions.

lab6-1Irene.PNG

2. Using a lookup table: In this option, the multiplications happen once, while creating the lookup table, so it was faster than the previous option.

lab6-2Irene.PNG

3. Using fixed-point arithmetic: as a setup step, the volume factor was converted to a fixed-point integer, and the actual calculation then became an integer multiply followed by a bit shift. As you can see below, this method was much faster than the previous versions.

lab6-3Irene.PNG

Finally, each version was compiled with the -O3 option to compare the times after compiler optimization, which resulted in shorter run times.

lab6Irene_O3.PNG


by Irene Bae at October 16, 2017 03:38 AM

October 15, 2017


Michael Pierre

First Bug Release Mozilla Thimble

After diving into the inner workings of Thimble/Brackets and getting two pull requests successfully merged into the Thimble master branch, I can confidently say this is my first release. If you haven’t read my previous post (link), the bug I fixed was actually created from another issue that was filed. My original bug was to make the "no index found" message appear when the user is updating an existing project without an index.html. A pull request has since been filed for that bug, however it is still waiting for review. Before I could even fix that bug, I noticed that I couldn’t create a new project on my localhost version of Thimble. This was odd because I saw no errors indicating that something was wrong with the Thimble server. After asking some questions on GitHub, I figured out that you need to run the Brackets and Thimble servers at the same time, which was something I wasn’t doing initially. David Humphrey then filed a bug to add a feature that would notify the developer if the Brackets server wasn’t running, which turned out to be a great starter bug.

The bug fix required a Node.js library called is-reachable, which provides the ability to check whether a certain server is reachable. This library was fairly simple to use, and the developer who created it did an excellent job describing how to install and use the function. After adapting the example included on the is-reachable GitHub page, I had a functioning prototype that would display whether the Brackets server was running. Once I filed a pull request, I was met with three or so requests for changes in my code. Things like changing the Brackets layout and fixing my Node.js require routing were just some of the things that were requested. In the end, the requests will probably help me avoid making the same mistakes in future pull requests, so I’m glad I got some help from the community. After I fixed those changes and committed my updated code, I got another request for changes, but not from a person this time. The project uses continuous integration services called Travis CI and AppVeyor that act as automated tests to make sure your code follows the master project’s coding style and syntax. For example, the Travis CI test wanted me to change my quotation marks from single quotes to double quotes when selecting IDs with jQuery, to match how they are declared in the rest of the project. I found the automated tests really interesting because they save so much time compared to having a person go through all your code to fix linting and other code-structure issues. In my particular case, I had a bunch of linting issues pointed out by Travis CI, and they were easily fixed by running a console lint fix command under the project’s path. It’s truly impressive how some parts of the development process, especially in open source, are automated now.

Eventually I got my fix up to the standards Mozilla Thimble was looking for, and my code got merged into the master project. It was a great feeling being able to contribute to something big, as well as knowing that my fix will help future new developers get into working on Thimble. What I didn’t know, however, was that my fix, which was aimed at making the development process easier, actually resulted in another bug that I had to fix. Since my bug fix uses a third-party Node.js library called is-reachable, I had to download and install it with npm install to be able to use its functionality in my code. This means that users who do not have is-reachable installed will get an error saying that they are missing a module that is needed for the code to run.

As a result, an issue was filed by Mozilla Thimble member Gideon Thomas to add is-reachable to the devDependencies section of package.json. This makes it so is-reachable is automatically installed when the user types the command npm install. Though this bug wasn’t as big as the previous one, it was still interesting how sometimes fixing something leads to other things becoming broken, which makes me wonder if that’s why some of the filed bugs haven’t been fixed in years.

Overall I had a really good time working on these bugs and I will definitely be taking on more in the future and maybe even different open source projects to see what I can accomplish. Thanks to the Thimble team for helping me along the way!

 


by michaelpierreblog at October 15, 2017 08:23 PM

October 14, 2017


Eric Schvartzman

Getting Involved In Open Source - Thimble


How I got into Open Source Development

For a long time now I've wanted to get involved in the open source community on GitHub, but I never got around to doing it. The idea was to pick a project I liked, add cool new features to it, and everything would be great. As the months went by, the school assignments started piling up and it always seemed like there was another test I had to prepare for. Eventually the idea of contributing to an open source project started to seem unrealistic. The more I learned about writing software, the more I thought it would be very difficult to make changes in a code base that has thousands of lines of code across several different directories. I felt lost and unqualified to take on such a responsibility. That was the WRONG way to approach open source software. The problem was not the number of lines of code or even the complexity of the project; it was the approach that I was trying to take.

As a person new to open source, instead of viewing it as a means of building something new, it is better to look at it from the perspective of a detective. You need to find out if there are any bugs within the project and see if the community is looking for contributors to fix those bugs. On top of that, you have to look at whether the community working on the project is friendly, open to newcomers regardless of experience, and has multiple people actively working on the project. I was introduced to this approach in my Topics in Open Source Development course at Seneca College by my professor David Humphrey. It was such a straightforward but eye-opening proposition, and I had no idea this was the practical way of getting your feet wet in open source software. It can be very daunting at the beginning, and that was true for me, but the only way you truly learn is by jumping in and getting your hands dirty. The next section explains how I got involved in the open source project Thimble.

Why I chose to contribute to Thimble

Thimble is an open source online code editor created by the Mozilla Foundation in partnership with CDOT at Seneca College. The open source community working on Thimble is very welcoming of newcomers, as can be seen on their website, where they state:
Contributions of all skills and backgrounds are welcome. You don't need to be an expert programmer, in fact, over half of our contributors identify themselves as students.
They have several main contributors who actively maintain and improve the software so you can expect to receive a response from them in a relatively short period of time. When I began looking for a bug to fix on Thimble I commented on one asking to take on the responsibility of solving the issue and within an hour I got a response! It was Luke Pacholski, one of the maintainers of Thimble, who gave me the green light to work on the bug. I was thrilled to be acknowledged by one of the main people working on Thimble and I felt a sense of pride in being able to make my original dreams a reality.

How I solved my bug

The bug that I decided to fix was a UI-related issue where the inline editor and the console had different icons for the close button. My job was to update the close button in the inline editor so that it matches the one in the console. The issue can be found on the Thimble GitHub repo at this link https://github.com/mozilla/thimble.mozilla.org/issues/2177. It sounded like an easy bug to fix, so I began diving into the problem and followed the instructions on the Thimble GitHub repository for setting up the development environment. The next step was finding the location of the bug in the source code...boy, was I in for a surprise. For those who've never worked with the Thimble project before, it can be quite daunting to navigate the code base. For me it was tricky to locate where to make changes because the issue was also related to a separate repository that Thimble depends on. That second repository is a modified version of Brackets called Bramble, and it is the text editor embedded within Thimble. This version of Brackets was updated so it could run within browsers, an amazing feat that allows Thimble to be cross-platform. The result of this fusion can be seen in the image below:



Once I opened the folder containing the source code for Brackets, the next step was to locate the line(s) of code causing the bug. Luckily for me, someone had already attempted to solve the issue, and she had commented on where she found the JavaScript file that creates the "X" icon. The code was located in "src/editor/InlineWidget.js" and all I had to do was remove ×. Before making any changes, though, I had to leave this code alone until I could find the CSS styling for the "X" button; otherwise Thimble would not display the button once the icon was removed. The next step was to locate the file containing the CSS styling.

In order to locate the CSS file I had to run the Thimble application and use the Chrome Web inspector to locate the element on the web page that calls the file. Once I found the close button in the inline editor I looked up the style tag that would link to the file housing the styling for it. What I found left me confused because I never saw a style tag without an href attribute. This is what I found in the Chrome Web inspector:

<style type="text/css" id="less:src-styles-bramble">...</style>

After doing a bit of digging in the project's folders, I realized that less:src-styles-bramble referred to the location of the file containing the CSS styling. When I found the file, it was named "brackets.less", which explained the lack of an href. After some quick research I learned that the ".less" extension refers to a CSS pre-processor known as Less.js, which can run in the browser or in Node. Once I made the changes to the CSS styling, I finally had the result Luke Pacholski was looking for.

Old


New


Final Thoughts

Although I finished making the changes based on the issue requirements, my pull request has not been merged yet. I was asked to make minor changes to the placement of the "X" icon, so I did what was asked, and now I am waiting for a response from the Thimble community on whether my new pull request will get merged. Overall the experience has been enjoyable, and I am looking forward to making more contributions to the Thimble project. For any newcomers to open source: if you feel overwhelmed by the scope of the projects on GitHub, don't let that discourage you. Remember to look at the community involved to see if it is welcoming of newcomers, and always try to connect with them on their IRC channels. Most importantly, learn how to get your hands dirty by picking a bug that you think is simple enough for your first try, and see it through until you solve the issue.

by Eric S (noreply@blogger.com) at October 14, 2017 08:07 PM


Chun Sing Lam

SPO600 Lab 5 – SIMD and Auto-Vectorization

SIMD instructions and vectorization

Vectorization refers to a compiler unrolling a loop and generating SIMD instructions for it. Each SIMD (Single Instruction, Multiple Data) instruction operates on more than one data element at a time, so a loop can run more efficiently. With auto-vectorization, the compiler can identify and optimize some loops on its own, which means it can vectorize a loop automatically. AArch64 has 32 vector registers, each 128 bits wide and named V0 to V31, which SIMD instructions use. You can refer to the ARM manual for more information about SIMD instructions and vector registers.

Writing vectorizable code and enabling auto-vectorization

For this lab, I need to write a program that fills two 1000-element integer arrays with random numbers between -1000 and 1000, sums these two arrays element-by-element to a third array, and calculates the sum of all elements in the third array and prints the result. Here is my program that accomplishes these tasks without considering vectorization:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RANDNUM 1000

int main(void)
{
 // Declare variables
 int array1[RANDNUM], array2[RANDNUM], array3[RANDNUM];
 int i, minNum = -1000, maxNum = 1000, sum = 0;

// Randomize seed
 srand(time(NULL));

for (i = 0; i < RANDNUM; i++) {
 // Store random numbers in two arrays
 array1[i] = minNum + rand() % (maxNum + 1 - minNum);
 array2[i] = minNum + rand() % (maxNum + 1 - minNum);

// Sum array elements into third array
 array3[i] = array1[i] + array2[i];

// Sum of third array elements
 sum += array3[i];
 }

// Display sum of third array elements
 printf("Sum of all elements in the third array is: %d\n", sum);
 return 0;
}

I use the command “gcc -O0 lab5.c -o lab5” to compile my program with no optimization using the -O0 option. Here is the disassembly output for the section <main> using the “objdump -d” command:

0000000000400684 <main>:
 400684: d285e010 mov x16, #0x2f00 // #12032
 400688: cb3063ff sub sp, sp, x16
 40068c: a9007bfd stp x29, x30, [sp]
 400690: 910003fd mov x29, sp
 400694: 12807ce0 mov w0, #0xfffffc18 // #-1000
 400698: b92ef7a0 str w0, [x29,#12020]
 40069c: 52807d00 mov w0, #0x3e8 // #1000
 4006a0: b92ef3a0 str w0, [x29,#12016]
 4006a4: b92efbbf str wzr, [x29,#12024]
 4006a8: d2800000 mov x0, #0x0 // #0
 4006ac: 97ffff99 bl 400510 <time@plt>
 4006b0: 97ffffac bl 400560 <srand@plt>
 4006b4: b92effbf str wzr, [x29,#12028]
 4006b8: 14000038 b 400798 <main+0x114>
 4006bc: 97ffff9d bl 400530 <rand@plt>
 4006c0: 2a0003e1 mov w1, w0
 4006c4: b96ef3a0 ldr w0, [x29,#12016]
 4006c8: 11000402 add w2, w0, #0x1
 4006cc: b96ef7a0 ldr w0, [x29,#12020]
 4006d0: 4b000040 sub w0, w2, w0
 4006d4: 1ac00c22 sdiv w2, w1, w0
 4006d8: 1b007c40 mul w0, w2, w0
 4006dc: 4b000021 sub w1, w1, w0
 4006e0: b96ef7a0 ldr w0, [x29,#12020]
 4006e4: 0b000022 add w2, w1, w0
 4006e8: b9aeffa0 ldrsw x0, [x29,#12028]
 4006ec: d37ef400 lsl x0, x0, #2
 4006f0: 914007a1 add x1, x29, #0x1, lsl #12
 4006f4: 913d4021 add x1, x1, #0xf50
 4006f8: b8206822 str w2, [x1,x0]
 4006fc: 97ffff8d bl 400530 <rand@plt>
 400700: 2a0003e1 mov w1, w0
 400704: b96ef3a0 ldr w0, [x29,#12016]
 400708: 11000402 add w2, w0, #0x1
 40070c: b96ef7a0 ldr w0, [x29,#12020]
 400710: 4b000040 sub w0, w2, w0
 400714: 1ac00c22 sdiv w2, w1, w0
 400718: 1b007c40 mul w0, w2, w0
 40071c: 4b000021 sub w1, w1, w0
 400720: b96ef7a0 ldr w0, [x29,#12020]
 400724: 0b000022 add w2, w1, w0
 400728: b9aeffa0 ldrsw x0, [x29,#12028]
 40072c: d37ef400 lsl x0, x0, #2
 400730: 913ec3a1 add x1, x29, #0xfb0
 400734: b8206822 str w2, [x1,x0]
 400738: b9aeffa0 ldrsw x0, [x29,#12028]
 40073c: d37ef400 lsl x0, x0, #2
 400740: 914007a1 add x1, x29, #0x1, lsl #12
 400744: 913d4021 add x1, x1, #0xf50
 400748: b8606821 ldr w1, [x1,x0]
 40074c: b9aeffa0 ldrsw x0, [x29,#12028]
 400750: d37ef400 lsl x0, x0, #2
 400754: 913ec3a2 add x2, x29, #0xfb0
 400758: b8606840 ldr w0, [x2,x0]
 40075c: 0b000022 add w2, w1, w0
 400760: b9aeffa0 ldrsw x0, [x29,#12028]
 400764: d37ef400 lsl x0, x0, #2
 400768: 910043a1 add x1, x29, #0x10
 40076c: b8206822 str w2, [x1,x0]
 400770: b9aeffa0 ldrsw x0, [x29,#12028]
 400774: d37ef400 lsl x0, x0, #2
 400778: 910043a1 add x1, x29, #0x10
 40077c: b8606820 ldr w0, [x1,x0]
 400780: b96efba1 ldr w1, [x29,#12024]
 400784: 0b000020 add w0, w1, w0
 400788: b92efba0 str w0, [x29,#12024]
 40078c: b96effa0 ldr w0, [x29,#12028]
 400790: 11000400 add w0, w0, #0x1
 400794: b92effa0 str w0, [x29,#12028]
 400798: b96effa0 ldr w0, [x29,#12028]
 40079c: 710f9c1f cmp w0, #0x3e7
 4007a0: 54fff8ed b.le 4006bc <main+0x38>
 4007a4: 90000000 adrp x0, 400000 <_init-0x4d8>
 4007a8: 91220000 add x0, x0, #0x880
 4007ac: b96efba1 ldr w1, [x29,#12024]
 4007b0: 97ffff70 bl 400570 <printf@plt>
 4007b4: 52800000 mov w0, #0x0 // #0
 4007b8: a9407bfd ldp x29, x30, [sp]
 4007bc: d285e010 mov x16, #0x2f00 // #12032
 4007c0: 8b3063ff add sp, sp, x16
 4007c4: d65f03c0 ret

The disassembly output above contains 81 lines of instructions.

Now, I use the command “gcc -O3 lab5.c -o lab5a” to compile my program with the -O3 option, which enables aggressive optimization, including auto-vectorization. Here is the disassembly output for the section <main>:

0000000000400580 <main>:
 400580: a9bc7bfd stp x29, x30, [sp,#-64]!
 400584: d2800000 mov x0, #0x0 // #0
 400588: 910003fd mov x29, sp
 40058c: a9025bf5 stp x21, x22, [sp,#32]
 400590: 529a9c75 mov w21, #0xd4e3 // #54499
 400594: a90153f3 stp x19, x20, [sp,#16]
 400598: 72a83015 movk w21, #0x4180, lsl #16
 40059c: f9001bf7 str x23, [sp,#48]
 4005a0: 52807d13 mov w19, #0x3e8 // #1000
 4005a4: 5280fa34 mov w20, #0x7d1 // #2001
 4005a8: 52800017 mov w23, #0x0 // #0
 4005ac: 97ffffd9 bl 400510 <time@plt>
 4005b0: 97ffffec bl 400560 <srand@plt>
 4005b4: 97ffffdf bl 400530 <rand@plt>
 4005b8: 2a0003f6 mov w22, w0
 4005bc: 97ffffdd bl 400530 <rand@plt>
 4005c0: 9b357c03 smull x3, w0, w21
 4005c4: 71000673 subs w19, w19, #0x1
 4005c8: 9b357ec2 smull x2, w22, w21
 4005cc: 9369fc63 asr x3, x3, #41
 4005d0: 4b807c63 sub w3, w3, w0, asr #31
 4005d4: 9369fc42 asr x2, x2, #41
 4005d8: 4b967c42 sub w2, w2, w22, asr #31
 4005dc: 1b148060 msub w0, w3, w20, w0
 4005e0: 1b14d842 msub w2, w2, w20, w22
 4005e4: 0b000040 add w0, w2, w0
 4005e8: 511f4000 sub w0, w0, #0x7d0
 4005ec: 0b0002f7 add w23, w23, w0
 4005f0: 54fffe21 b.ne 4005b4 <main+0x34>
 4005f4: 2a1703e1 mov w1, w23
 4005f8: 90000000 adrp x0, 400000 <_init-0x4d8>
 4005fc: 911f8000 add x0, x0, #0x7e0
 400600: 97ffffdc bl 400570 <printf@plt>
 400604: 52800000 mov w0, #0x0 // #0
 400608: f9401bf7 ldr x23, [sp,#48]
 40060c: a94153f3 ldp x19, x20, [sp,#16]
 400610: a9425bf5 ldp x21, x22, [sp,#32]
 400614: a8c47bfd ldp x29, x30, [sp],#64
 400618: d65f03c0 ret
 40061c: 00000000 .inst 0x00000000 ; undefined

The disassembly output above contains 40 lines of instructions, which is about half the number of instructions compared to the first case. This indicates that optimization has occurred. Auto-vectorization is enabled, but the disassembly output does not contain SIMD instructions, which means that the code is not vectorized.

I need to change my code in order for it to become vectorizable. Instead of using one for loop, I will divide it into three for loops. The first loop stores random numbers in the two arrays. The second loop sums these two arrays element-by-element into a third array. The third loop calculates the sum of all of the elements in the third array. Here is my program with vectorizable code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RANDNUM 1000

int main(void)
{
 // Declare variables
 int array1[RANDNUM], array2[RANDNUM], array3[RANDNUM];
 int i, minNum = -1000, maxNum = 1000, sum = 0;

// Randomize seed
 srand(time(NULL));

// Store random numbers in two arrays
 for (i = 0; i < RANDNUM; i++) {
 array1[i] = minNum + rand() % (maxNum + 1 - minNum);
 array2[i] = minNum + rand() % (maxNum + 1 - minNum);
 }

// Sum array elements into third array
 for (i = 0; i < RANDNUM; i++) {
 array3[i] = array1[i] + array2[i];
 }

// Sum of third array elements
 for (i = 0; i < RANDNUM; i++) {
 sum += array3[i];
 }

// Display sum of third array elements
 printf("Sum of all elements in the third array is: %d\n", sum);
 return 0;
}

I use the command “gcc -O0 lab5b.c -o lab5b” to compile my program with no optimization using the -O0 option. Here is the disassembly output for the section <main>:

0000000000400684 <main>:
 400684: d285e010 mov x16, #0x2f00 // #12032
 400688: cb3063ff sub sp, sp, x16
 40068c: a9007bfd stp x29, x30, [sp]
 400690: 910003fd mov x29, sp
 400694: 12807ce0 mov w0, #0xfffffc18 // #-1000
 400698: b92ef7a0 str w0, [x29,#12020]
 40069c: 52807d00 mov w0, #0x3e8 // #1000
 4006a0: b92ef3a0 str w0, [x29,#12016]
 4006a4: b92efbbf str wzr, [x29,#12024]
 4006a8: d2800000 mov x0, #0x0 // #0
 4006ac: 97ffff99 bl 400510 <time@plt>
 4006b0: 97ffffac bl 400560 <srand@plt>
 4006b4: b92effbf str wzr, [x29,#12028]
 4006b8: 14000023 b 400744 <main+0xc0>
 4006bc: 97ffff9d bl 400530 <rand@plt>
 4006c0: 2a0003e1 mov w1, w0
 4006c4: b96ef3a0 ldr w0, [x29,#12016]
 4006c8: 11000402 add w2, w0, #0x1
 4006cc: b96ef7a0 ldr w0, [x29,#12020]
 4006d0: 4b000040 sub w0, w2, w0
 4006d4: 1ac00c22 sdiv w2, w1, w0
 4006d8: 1b007c40 mul w0, w2, w0
 4006dc: 4b000021 sub w1, w1, w0
 4006e0: b96ef7a0 ldr w0, [x29,#12020]
 4006e4: 0b000022 add w2, w1, w0
 4006e8: b9aeffa0 ldrsw x0, [x29,#12028]
 4006ec: d37ef400 lsl x0, x0, #2
 4006f0: 914007a1 add x1, x29, #0x1, lsl #12
 4006f4: 913d4021 add x1, x1, #0xf50
 4006f8: b8206822 str w2, [x1,x0]
 4006fc: 97ffff8d bl 400530 <rand@plt>
 400700: 2a0003e1 mov w1, w0
 400704: b96ef3a0 ldr w0, [x29,#12016]
 400708: 11000402 add w2, w0, #0x1
 40070c: b96ef7a0 ldr w0, [x29,#12020]
 400710: 4b000040 sub w0, w2, w0
 400714: 1ac00c22 sdiv w2, w1, w0
 400718: 1b007c40 mul w0, w2, w0
 40071c: 4b000021 sub w1, w1, w0
 400720: b96ef7a0 ldr w0, [x29,#12020]
 400724: 0b000022 add w2, w1, w0
 400728: b9aeffa0 ldrsw x0, [x29,#12028]
 40072c: d37ef400 lsl x0, x0, #2
 400730: 913ec3a1 add x1, x29, #0xfb0
 400734: b8206822 str w2, [x1,x0]
 400738: b96effa0 ldr w0, [x29,#12028]
 40073c: 11000400 add w0, w0, #0x1
 400740: b92effa0 str w0, [x29,#12028]
 400744: b96effa0 ldr w0, [x29,#12028]
 400748: 710f9c1f cmp w0, #0x3e7
 40074c: 54fffb8d b.le 4006bc <main+0x38>
 400750: b92effbf str wzr, [x29,#12028]
 400754: 14000012 b 40079c <main+0x118>
 400758: b9aeffa0 ldrsw x0, [x29,#12028]
 40075c: d37ef400 lsl x0, x0, #2
 400760: 914007a1 add x1, x29, #0x1, lsl #12
 400764: 913d4021 add x1, x1, #0xf50
 400768: b8606821 ldr w1, [x1,x0]
 40076c: b9aeffa0 ldrsw x0, [x29,#12028]
 400770: d37ef400 lsl x0, x0, #2
 400774: 913ec3a2 add x2, x29, #0xfb0
 400778: b8606840 ldr w0, [x2,x0]
 40077c: 0b000022 add w2, w1, w0
 400780: b9aeffa0 ldrsw x0, [x29,#12028]
 400784: d37ef400 lsl x0, x0, #2
 400788: 910043a1 add x1, x29, #0x10
 40078c: b8206822 str w2, [x1,x0]
 400790: b96effa0 ldr w0, [x29,#12028]
 400794: 11000400 add w0, w0, #0x1
 400798: b92effa0 str w0, [x29,#12028]
 40079c: b96effa0 ldr w0, [x29,#12028]
 4007a0: 710f9c1f cmp w0, #0x3e7
 4007a4: 54fffdad b.le 400758 <main+0xd4>
 4007a8: b92effbf str wzr, [x29,#12028]
 4007ac: 1400000b b 4007d8 <main+0x154>
 4007b0: b9aeffa0 ldrsw x0, [x29,#12028]
 4007b4: d37ef400 lsl x0, x0, #2
 4007b8: 910043a1 add x1, x29, #0x10
 4007bc: b8606820 ldr w0, [x1,x0]
 4007c0: b96efba1 ldr w1, [x29,#12024]
 4007c4: 0b000020 add w0, w1, w0
 4007c8: b92efba0 str w0, [x29,#12024]
 4007cc: b96effa0 ldr w0, [x29,#12028]
 4007d0: 11000400 add w0, w0, #0x1
 4007d4: b92effa0 str w0, [x29,#12028]
 4007d8: b96effa0 ldr w0, [x29,#12028]
 4007dc: 710f9c1f cmp w0, #0x3e7
 4007e0: 54fffe8d b.le 4007b0 <main+0x12c>
 4007e4: 90000000 adrp x0, 400000 <_init-0x4d8>
 4007e8: 91230000 add x0, x0, #0x8c0
 4007ec: b96efba1 ldr w1, [x29,#12024]
 4007f0: 97ffff60 bl 400570 <printf@plt>
 4007f4: 52800000 mov w0, #0x0 // #0
 4007f8: a9407bfd ldp x29, x30, [sp]
 4007fc: d285e010 mov x16, #0x2f00 // #12032
 400800: 8b3063ff add sp, sp, x16
 400804: d65f03c0 ret

The disassembly output above contains 97 lines of instructions. We get more instructions than the first case with one loop, which is as expected since we now have three loops. Also as expected, the disassembly output does not contain SIMD instructions since auto-vectorization is not enabled.

Now, I use the command “gcc -O3 lab5b.c -o lab5c” to compile my program with a lot of optimization using the -O3 option. Here is the disassembly output for the section <main>, with my comments added:

0000000000400580 <main>:
// main() function
 400580: d285e410 mov x16, #0x2f20 // #12064
 400584: cb3063ff sub sp, sp, x16 // stack pointer - x16
 400588: d2800000 mov x0, #0x0 // #0
 40058c: a9007bfd stp x29, x30, [sp] // store x29 and x30 to stack pointer address
 400590: 910003fd mov x29, sp // move stack pointer to x29
 400594: a90153f3 stp x19, x20, [sp,#16] // store x19 and x20 to stack pointer address with offset
 400598: 529a9c74 mov w20, #0xd4e3 // #54499
 40059c: a9025bf5 stp x21, x22, [sp,#32] // store x21 and x22 to stack pointer address with offset
 4005a0: 72a83014 movk w20, #0x4180, lsl #16 // move value to w20
 4005a4: f9001bf7 str x23, [sp,#48] // store x23 to stack pointer address with offset
 4005a8: 910103b6 add x22, x29, #0x40 // x29 + 64 and store in x22
 4005ac: 913f83b5 add x21, x29, #0xfe0 // x29 + 4064 and store in x21
 4005b0: 5280fa33 mov w19, #0x7d1 // #2001
 4005b4: d2800017 mov x23, #0x0 // #0
 4005b8: 97ffffd6 bl 400510 <time@plt> // call time subroutine
 4005bc: 97ffffe9 bl 400560 <srand@plt> // call srand subroutine
// first loop
// array1[i] = minNum + rand() % (maxNum + 1 - minNum)
 4005c0: 97ffffdc bl 400530 <rand@plt> // call rand subroutine
 4005c4: 9b347c01 smull x1, w0, w20 // w0 * w20 and store in x1
 4005c8: 9369fc21 asr x1, x1, #41 // shift x1 value right by 41 bits
 4005cc: 4b807c21 sub w1, w1, w0, asr #31 // subtract shifted register
 4005d0: 1b138020 msub w0, w1, w19, w0 // multiply and subtract
 4005d4: 510fa000 sub w0, w0, #0x3e8 // subtract
 4005d8: b8376ac0 str w0, [x22,x23] // store w0 to an address
// array2[i] = minNum + rand() % (maxNum + 1 - minNum)
 4005dc: 97ffffd5 bl 400530 <rand@plt> // call rand subroutine
 4005e0: 9b347c01 smull x1, w0, w20 // w0 * w20 and store in x1
 4005e4: 9369fc21 asr x1, x1, #41 // shift x1 value right by 41 bits
 4005e8: 4b807c21 sub w1, w1, w0, asr #31 // subtract shifted register
 4005ec: 1b138020 msub w0, w1, w19, w0 // multiply and subtract
 4005f0: 510fa000 sub w0, w0, #0x3e8 // subtract
 4005f4: b8376aa0 str w0, [x21,x23] // store w0 to an address
// loop if i < RANDNUM
 4005f8: 910012f7 add x23, x23, #0x4 // x23 + 4 and store in x23
 4005fc: f13e82ff cmp x23, #0xfa0 // test if x23 = 4000
 400600: 54fffe01 b.ne 4005c0 <main+0x40> // repeat first loop if x23 not equal 4000
 400604: d283f002 mov x2, #0x1f80 // #8064
 400608: 8b0203a1 add x1, x29, x2 // x29 + x2 and store in x1
 40060c: d2800000 mov x0, #0x0 // #0
// second loop
// array3[i] = array1[i] + array2[i];
 400610: 3ce06ac0 ldr q0, [x22,x0] // load register
 400614: 3ce06aa1 ldr q1, [x21,x0] // load register
 400618: 4ea18400 add v0.4s, v0.4s, v1.4s // SIMD vector instruction: v0.4s + v1.4s and store in v0.4s
 40061c: 3ca06820 str q0, [x1,x0] // store q0 to an address
// loop if i < RANDNUM
 400620: 91004000 add x0, x0, #0x10 // x0 + 16 and store in x0
 400624: f13e801f cmp x0, #0xfa0 // test if x0 = 4000
 400628: 54ffff41 b.ne 400610 <main+0x90> // repeat second loop if x0 not equal 4000
 40062c: 4f000400 movi v0.4s, #0x0 // SIMD vector instruction: move immediate (vector)
 400630: aa0103e0 mov x0, x1 // move x1 to x0
 400634: d285e401 mov x1, #0x2f20 // #12064
 400638: 8b0103a1 add x1, x29, x1 // x29 + x1 and store in x1
// third loop
// sum += array3[i];
 40063c: 3cc10401 ldr q1, [x0],#16 // load register
 400640: 4ea18400 add v0.4s, v0.4s, v1.4s // SIMD vector instruction: v0.4s + v1.4s and store in v0.4s
 400644: eb01001f cmp x0, x1 // test if x0 = x1
 400648: 54ffffa1 b.ne 40063c <main+0xbc> // repeat third loop if x0 not equal x1
 40064c: 4eb1b800 addv s0, v0.4s // SIMD vector instruction: add across vector
 400650: 90000000 adrp x0, 400000 <_init-0x4d8> // store address in x0
 400654: 91210000 add x0, x0, #0x840 // x0 + 2112 and store in x0
 400658: 0e043c01 mov w1, v0.s[0] // SIMD vector instruction: move v0.s[0] to w1
 40065c: 97ffffc5 bl 400570 <printf@plt> // call printf subroutine
 400660: f9401bf7 ldr x23, [sp,#48] // load register
 400664: a94153f3 ldp x19, x20, [sp,#16] // load pair of registers
 400668: 52800000 mov w0, #0x0 // #0
 40066c: a9425bf5 ldp x21, x22, [sp,#32] // load pair of registers
 400670: d285e410 mov x16, #0x2f20 // #12064
 400674: a9407bfd ldp x29, x30, [sp] // load pair of registers
 400678: 8b3063ff add sp, sp, x16 // stack pointer + x16 and store in stack pointer
 40067c: d65f03c0 ret // return from subroutine

The disassembly output above contains 64 lines of instructions, which is fewer than the case with no optimization. In this case, the disassembly output contains SIMD instructions, which means that the code is vectorized. Specifically, the disassembly output shows that the second and third loops are vectorized: both contain SIMD vector instructions that use vector registers. For example, the SIMD instruction “add v0.4s, v0.4s, v1.4s” performs 4 additions in a single instruction. In the register name “v0.4s”, “v0” is vector register 0, “4” is the number of data elements or lanes, and “s” indicates a data element size of 32 bits. One instruction uses “v0.s[0]”, which refers to a single vector register element, where “[0]” is the element index. Some SIMD instructions share mnemonics with other types of instructions; for example, “add” and “mov” become SIMD instructions when vector registers are used.

There are a few things to consider when you want to write vectorizable loops. Simple loops are more likely to be vectorizable than complex ones. A loop will not be vectorizable if it contains complex calculations, such as the function calls and modulus in the first loop of my program. The same is true if data dependencies exist within the loop, which is when the value computed in one iteration depends on a value produced by another and values are overwritten. These conditions explain why my first program, with only one big loop, cannot be vectorized. Writing vectorizable code is not easy, because different compilers handle vectorization differently and we are unfamiliar with that process. It will probably take at least a couple of attempts at modifying our code to get it to work. There are some general guidelines we can follow, but they may not always be helpful. On the other hand, it is not difficult to identify vectorized code in the disassembly output.


by chunsinglam at October 14, 2017 07:46 AM

October 13, 2017


Ajla Mehic

Working on my first bug

In order to start working on my bug, the first step was to set up the local environment. Since my bug is part of the Brackets text editor, I only had to install Brackets in order to begin working on it. Luckily, I did not have any problems setting it up. It was very simple and I only had to follow a few steps from Thimble’s readme:


After that, I was able to get the text editor running.

I then had to figure out why the problem was only happening in Firefox. I did some searching and found out that it’s a common problem where the CSS properties height / line-height render differently in Firefox than in Chrome. Now I had to find this property in the code and see if it really was the problem.

When I opened the code, it was confusing and overwhelming at first. It took me a while but I was able to find the CSS code for the file renaming UI:


Fixing the bug was the easiest part. All I had to do was add a separate line-height property for Firefox. Luckily, something similar had been done previously in the code, so I didn’t have to figure out how to do this and instead just reused the same line. This is what I added:

Here is what the file renaming input looked like before and after I changed it:


by amehic at October 13, 2017 10:32 PM

October 12, 2017


Eric Schvartzman

Contributing To Open Source Software - Thimble

Experience So Far Contributing To Thimble

On October 3, 2017 I wrote about my experiences getting involved in Mozilla's open source project Thimble. The entire process was a very positive one and I am glad I took the steps forward to make it happen. The community on Thimble is very welcoming of new contributors regardless of experience, and considering the large scope of the project the community could use as much help as possible. The maintainers of Thimble are amazing to work with, especially for individuals new to open source. On the Thimble website it states that over half the contributors identify themselves as students, and in order to help beginners find their first bugs the Thimble community will label easy bugs as "good first bug". The community is filled with contributors who are very knowledgeable about how the project should work, and they always encourage participants to ask questions if they get stuck. At first I was hesitant to ask small questions, but I was reassured that this would not be a problem after seeing the way people respond positively to comments on GitHub.

Progress On My First Bug

After about 2 weeks of making pull requests and responding to comments about the changes I made, I was finally able to get my pull request merged. The bug involved changing the UI for an X button that would close the inline editor in the Brackets text editor, which is embedded within Thimble. The pull request took two weeks to merge due to the nature of designing user interfaces, as well as unforeseen design conflicts caused by the changes I made. Slowly but surely the issues were solved, and eventually I was able to produce a workable solution for the bug with the help of Luke and David. These two individuals helped me out a lot in understanding how to go about making the proper changes to the code that was causing the bug. I was even able to add in my own design idea for a specific aspect of the button. Since Thimble allows users to change the theme of the text editor from dark to light, there had to be a modification to the X button so that it would be visible in the light theme. I was able to add a minor touch to the background colour of the button so that it would display nicely in the light theme. If you are interested in seeing the process of how I started solving the bug through to implementing a final solution, you can see it here.

Final Thoughts

Now that I officially have one contribution under my belt, I'm looking forward to working on the next bug in Thimble. It's a great feeling to know you participated in a software project regardless of the size of your contribution. People will be using the software that you had a part in building, and on top of that you get to network with the programmers working on the same project as you. Software isn't only about writing code; it's about working collaboratively with other people. That aspect of programming is the one I appreciate the most.

by Eric S (noreply@blogger.com) at October 12, 2017 03:21 PM


Mat Babol

Starting work on my first bug

This week I finally started working on one of my bugs. I decided to start with the DevTools bug, since I got that one assigned to me first. Once I cloned the repo to my local machine, there were a few packages that I had to install first, mainly Node and Yarn. The README file was very easy to follow. A few weeks ago, I ran Firefox from source, so I knew how to set it up. You can read up on my experience with that here. I had to download and build the source once again; fortunately, this time I knew how to do it. So I had the correct version of Firefox running; now it's time to look into the bug!

The bug instructions are easy to understand, with easy to follow steps for reproduction. There is also a handy video for reference. Once I saw the "Learn More" button, I understood the problem. When the user hovers over the button, the entire row acts as a button, instead of just the image.


Definitely not a software-breaking bug. Nonetheless, I'm excited to start working on this. After opening up the source code, I was amazed at the amount of files! I did not know where to begin. How do I find my small bug inside of all of this?! There were hundreds of JavaScript files inside of NetTools.

After looking through the files for some time, I found a file called timings-panel.js; I think this is it. After playing around with some of the code, I think I found the culprit.


The CSS class panel-container contains nothing about width, only height. So I try adding a width declaration to see what happens, and it worked! Eh, sort of. The width changed for all the other panels too, which is not what we want.



We're almost there though. Instead of adding a width declaration to the panel-container class, I decided to add another class to my div tag. I found an unused class called timings-label, which contains only a width property. The name itself suggests that this is what I need. Once I built and ran the project again, I found the bug was fixed. Success! My first bug is complete.


The bug is now complete; now I just need to submit it. That's also something I've never done before. I've done enough work for the day, so that will have to wait until my next blog. Stay posted while I complete my first Pull Request.

by Mat Babol (noreply@blogger.com) at October 12, 2017 04:42 AM


Matthew Marangoni

Algorithm Selection Testing

Introduction


In this demonstration, we'll be benchmarking various versions of a program which populates a very large array (500,000,000 values) with simulated, signed 16-bit integer audio signal samples. Those samples will then be scaled by a volume factor ranging from 0.0000 to 1.0000. The ideal solution will use minimal processing to scale the sound samples, in order to save battery life on a mobile device (and simply be as efficient as possible).


Benchmarking


I've gone ahead and made 3 versions of this program. We'll compile each one with both no optimization and O3 optimization, then benchmark each using the time command as well as a timer in the program. The timer in each program tracks the time elapsed to complete the loop that computes our scaled volume values using the varying methods.

Version 1: This version multiplies each sample by the floating point volume factor (0.75)


Version 1 Results:

















Version 2: This version pre-calculates a lookup table of all possible sample values multiplied by the volume factor, and looks up each sample in the table to get the scaled values.


Version 2 Results:

















Version 3: This version converts our volume factor to a fixed-point integer by multiplying it by a binary number representing the fixed-point value "1" (we'll use 256 as our multiplier). We then shift the result of each multiplication to the right by 8 bits.


Version 3 Results:

















Analysis

A few things we can immediately tell from the tests above:
  • In all versions of our program, the O3 optimized versions significantly outperformed the non-optimized counterparts
  • If we use /usr/bin/time instead of just time, we can see the max memory footprint of each process, seen below

























  • The performance of each approach can be measured from the times recorded in each program. Version 3 is the fastest (1700ms/136ms), followed by Version 1 (1883ms/378ms), then Version 2 (2044ms/388ms)

by Matthew Marangoni (noreply@blogger.com) at October 12, 2017 03:06 AM


Diana Dinis-Alves

Fixing Bugs

Follow up to this post.

Not much to say. Installed Thimble with little to no issues. I did have to split the installation process into a two day endeavor due to some Wi-Fi issues though. I’ve started work on the bug, but I’m not entirely sure how to check if the changes I’ve made have produced the results I want.

Learned a bit more about git and its functionality as well this week.
Update:

I’ve successfully fixed the bug I took on, ensured that it worked as desired, and all that’s left is to make a Pull Request on the Brackets repo. Hurray!


by ddinisalves at October 12, 2017 01:38 AM


Steven De Filippis

That pesky pdf.js JURL/URL Safari bug

So as I’ve started to dig into the searchParams() bug within pdf.js, I seem to have tracked down the underlying issue.

First, the browser redirect seems to be related with what I had referenced in the initial blog post. I noticed the replace() function for the url parameter was throwing undefined errors. I corrected this by casting the url parameter to a String. This resolved the redirect to a blank page with the regular expression URL. However, upon resolving this issue, the main underlying issue seems to appear:

After much debugging, it seems the main issue is the re-definition of the URL object on a global scope.

We can see that by doing this, any future instances of URL will be created under the context of JURL. As a result, when searchParams() is called, the function is not defined within JURL and it is therefore unable to resolve the method to parse the passed parameters. It is interesting that Chrome and other browsers are able to catch this error and fall back to the original URL object to find the original method.

I have noticed the issue is immediately resolved when removing the global scope re-definition of URL.

Now that I have identified the underlying cause, I am currently evaluating the best possible way to resolve this without breaking anything else in the making. There must be a reason for the redefinition of URL being JURL on a global scope. So the intended bug fix should likely keep this rule in effect.

On that note, I have noticed a bunch of static methods that are proxied back to URL from JURL which can be found here.

I am thinking of using this section for inserting a proxy for searchParams->get() for my final fix.

One thing I did learn about that I found crucial for debugging this was disabling the JavaScript Console from clearing upon redirect. I was able to turn this off in Safari’s Developer Settings:

Since the browser got redirected with the regular expression bug, the JavaScript console was always clearing the contents before I had a chance to see my debug messages. By removing the clear setting in Safari, it allowed me to see my debug statements via console.log and track down the issues quickly:


Tune in next week for the final solution! 🙂

by Steven at October 12, 2017 01:30 AM


Jiel Selmani

Landed Two PR's, And Made A New Friend - Part 2

Friends Hanging Out

In Part Two, we can dive into the PR that I spent more time working on.  Again, although it was easy, I learned a lot about the entire debugger.html project and did some research about React in the process, which I'm really interested in learning more about. Just so we're on the same page, you can see the link below:

https://github.com/devtools-html/debugger.html/pull/4230

In this bug, I was asked to move all of the workers to their own directory.  It sounds really simple but it does require a lot of testing in order to get it landed.  Thankfully, I had help learning how testing works from Senior Software Engineer for the Mozilla DevTools team, Jason Laster.

Before I discuss what I fixed, I have to give kudos to this guy.  When I joined the Slack channel, Jason was the first person to greet me to the group and also assigned me my issue, which you can find below:

https://github.com/devtools-html/debugger.html/issues/3725

In all honesty, I ask a lot of questions, simply because I want to know how everything works.  Not ONCE did Jason seem impatient in answering them for me, giving me more motivation to keep working and trying to add value to the team.  One late night, I was getting closer and closer to figuring out how to get everything working and having the tests pass, but due to my Windows build (smh) my line endings didn't match the snapshots that were created by other contributors (damn you CRLF). 

Jason gladly jumped into a call with me at approximately 1:00AM to help me try and figure it out, and by the end of it, he and I were chatting about the industry, his career, and my aspirations.  It was late so I didn't finish that night, but I was closer because of our call.  He gave me many pointers on how to get through issues faster and how to create test cases that give insight into how changes will affect the overall build.

This guy is super welcoming and I would gladly recommend anyone who wants to work on debugger.html to give it a try.  You'd be working with really bright minds and I would be more than happy to keep working with Jason and the team for many years to come.  So, if you're reading this and are Jason's manager, give this guy a pat on the back because he's the reason why I want to keep working on this project instead of jumping elsewhere.

Now, back to the bug.  In order to get started, I had to get the environment setup.  I had to install Yarn, which is a package manager that is used for this project.  You can find the link to learn more about Yarn below:

https://yarnpkg.com/en/

Basically, it caches every package that is downloaded so it doesn't need to download it again, and updates are super fast.  I also had to download a newer version of Node in order to get up and running.  Once I forked the project and cloned it to my machine, I then needed to run "yarn install" to get all of the packages installed and updated to the current needs of the project.  Now is where the fun starts.  In total, I had to change 82 files, and I didn't do it the way you would expect.  Because I was so eager to work on it and get it finished, I just went on a changing spree, and that's where my problems started.  When I ran "yarn test" after making changes, I fell into problems where many of the tests were failing.  Super overwhelming, and I would never recommend doing any project this way.  Jason also reaffirmed it wasn't the best way to go.  However, with my persistence I was able to keep at it and eventually only had a few tests failing, which is where Jason came in to help.

What I really learned about this project is what it takes to get a product to ship.  With such a huge project, testing is automated but plentiful, and that's what makes these projects great.  You really see how it all fits together, and it's amazing how so many people can work collaboratively in order to accomplish a task.  Since the project was mostly changing paths and directory structure, it wasn't very "mind intensive", but it was challenging enough because I've never done it before.  After tinkering with it for a while, I finally saw this beauty in my command line:


I made my first PR on GitHub to a major project and it didn't pass CI or Travis CI testing.  I realized after seeing how the tests failed that I also needed to run "yarn run lint" and "yarn run flow" to check for other discrepancies that I missed.  After fixing those and pushing back to GitHub, another snapshot failed...stupid line endings.  Jason helped push me along the finish line and landed me my first merge to a major project.  To add to his welcoming nature, he also called out my contribution in Slack:


Overall, my experience with this process was really pleasant and I'm happy to have joined in on this project.  Moving forward, I'm ready to continue taking on bugs/features for the DevTools team.  I asked Jason what advice he would give to new contributors, and his answer was simple.  

"Get the project installed and have fun breaking it."

Thanks Jason, and let's grab that cold one soon.

by Jiel Selmani (noreply@blogger.com) at October 12, 2017 12:59 AM


Kelvin Cho

Lab 5 – Vectorization Lab

For this lab we are to write a short program using two 1000-element integer arrays, filling them with random numbers from -1000 to +1000. Then we sum the values of the two arrays into a third array. Finally, using the third array, we print the total of all the numbers.

For this particular code we are to use the aarchie server, which is an AArch64-based system, and compile the code with auto-vectorization enabled.

The source code of this program:

pastiebin.com/embed/59dd71836c565


This is the object dump file:

http://pastiebin.com/embed/59dd6be672464


The compiler command that was use to build this is:

gcc -O3 -ftree-vectorize -o lab lab.c

The link below is an object dump using the compiler option -O0, so basically without any optimization through SIMD.

http://pastiebin.com/embed/59dd78bb5b449

For the code to be vectorized you have to use the flag -ftree-vectorize, or -O3, which turns on auto-vectorization in most modern versions of gcc.

As seen above, no vector instructions were newly added; the only changes that happened were optimizations. So to make vectorization happen we will change a bit of our source code.

https://pastebin.com/embed_iframe/BQaK0Pz1


The changes that we made were just moving the other additions into a different for loop.



As you can see, the code has grown a fair bit compared to before. Most of the added code is the same as the original; the new lines are for the loops.

As a result of changing our code, we can see that vector instructions are added.

add v0.4s, v0.4s, v1.4s

movi v0.4s, #0x0

addv s0, v0.4s

mov w1, v0.s[0]

Reflection:

My experience with this lab was that auto-vectorization made no vector changes to my original code, only optimizations. I am assuming the compiler decided it was not worth putting into vectors because it would be slower, and that it worked in the second version because three simple loops are easier for the compiler to vectorize than one combined loop.


by klvincho at October 12, 2017 12:56 AM


Jiel Selmani

Landed Two PR's, And Made A New Friend - Part 1

I Love Firefox DevTools
Should I Say More?
A couple weeks ago, I did the unthinkable...Ok, not really.

As a new contributor to Open Source projects,  I had the opportunity to dive in and look for projects to work on.  If you take a look at my last post, you'll notice that it isn't easy to find a welcoming community for newcomers due to the sheer fact that everyone working on a project is super busy!  However, when you do find a community that wants to see you do well and succeed, it is a blessing.

First and foremost, let's take a look at my two PR's.  A link to both is below (both are in DevTools):

https://github.com/devtools-html/debugger.html/pull/4230
https://bugzilla.mozilla.org/show_bug.cgi?id=1402387  

Now, I'm not going to say that I learned so much that I am now the greatest developer in the world (by all means I have a lot to learn), but this process was very inviting and I am intrigued to do more and more work for the team.  I'm all about adding value and putting myself in challenging situations.  Of the two PR's, I was most interested in the first because I had the opportunity to work with more code and see more of the project, but I'll get to that.

For the bug I found through Bugs Ahoy!, this was a relatively easy fix. I had to simply camel case files and ensure that I didn't break anything.  It wasn't very hard, but I learned something new from it.  Having to work with the Firefox Nightly build, I had to utilize a lot of the same commands in order to run and build my changes.  At first, the changes I made would not be reflected because I was also changing the moz.build file, which changed the overall source.  After scratching my head a few times wondering why my "mach build" and "mach run" commands weren't working properly when I would test, I took to the Slack channel to ask a few questions, and was taught immediately how to solve such a simple issue.  Before building, you need to clobber your files and then build.  Essentially, the object directory still contained stale output from the previous build, and my changes would not take effect until it was deleted.  So, I entered "mach clobber" and when that finished, I entered "mach build", only to sit through an entire 40 minutes of having the project build again.

That was my fault.  When I built the Nightly build the first time, I didn't build it in Artifact Mode - meaning that it had to recompile and build all C++ components locally.  However, there is a solution and it is easy to make the change.  Here's the link:

https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions/Artifact_builds



Basically, by changing to Artifact Mode, my machine downloads prebuilt binary components instead of compiling them locally, which makes builds much faster.

After everything worked out, I had to create a .patch file for testing upstream through continuous integration (CI).  It's pretty close to how you would commit projects with Git, except we're using Mercurial instead.  You can find out more about how to create patches in the following link:

http://docs.firefox-dev.tools/contributing/making-prs.html

It's a great document and I would bookmark it if you're new to contributing like me.  My reviewer, Patrick Brosset, made a small change to reflect how the class was named, and after that everything passed nice and clean.  The patch was landed shortly after that.

Overall, this process was straight-forward and Patrick was also really friendly throughout the whole thing.  I asked him a few questions via the Slack channel and he gave me some pointers about how to traverse through my interests and where to start, which I'm grateful for.

Now, for Part 2...which you can find out about in my next post :)

Check it out below:

http://mylyfeincode.blogspot.ca/2017/10/landed-two-prs-and-made-new-friend-part_11.html

by Jiel Selmani (noreply@blogger.com) at October 12, 2017 12:50 AM


Joshua Longhi

Working on bugs

The bug I was recently working on this week was in the Firefox developer tools. It was an update to the developer tools to call a new function to collapse nodes when viewing HTML. The change was to make the collapse function call the collapse-all function to recursively collapse all the child nodes of the node you are collapsing. The community so far has been very friendly, helpful and patient as I was making obvious mistakes such as not updating comments. They stayed kind and helped me through every step, including testing and uploading.

As I worked on the bug I learned a lot about Mercurial, the version control system used in the Firefox project. While Mercurial is similar to Git, I am not an expert in either, and it has been a learning experience. I was having trouble pulling new versions of Firefox and ended up pulling prematurely and rebuilding my entire project, which takes hours on the virtual machine I use.

Mozilla does very interesting things when managing a multi-language, large-scale project. I had to update the string of an item on the menu, and that change would not automatically be translated to the other languages. To solve this, Mozilla requires any change to a string to be accompanied by a variable name change as well. The reason they do this is that when developers working on the other languages build the project, it will fail because the old variable no longer exists. This signals the developers to look at the new variable and apply a translation, ensuring the strings are updated across all languages. They also have an online code tester that tests changes done to the project, as well as code deputies who are responsible for landing code changes to ensure nothing breaks.


by jlonghiblog at October 12, 2017 12:12 AM

October 11, 2017


Sofia Ngo-Trong

Overriding library functions, and the multiarch system

Overriding libraries

As we saw previously, a library function can have different implementations, each optimized for a specific computer architecture. So how can we use a specific implementation of a function in our program?

Usually when we compile a program, there is an intermediate stage where we “link” the different object files (extension .o) for our project into an executable output file. We can utilize object files provided by different libraries as they may contain readily available functions that we want to use in our program. In the C library, library source files are either built as archive files (with an .a extension) which are statically linked into executables, or as “shared object files” (ending in .so) which are dynamically linked into our executables.

With the gcc compiler, you would use -L for the path of the library file (which is not in the standard library path), and -l to link in a library. For example:

gcc  -o  hello  hello.c  -L/path/to/lib -llibrary

would link the library called “liblibrary.so” from the directory /path/to/lib into my program (the -l flag prepends “lib” and appends the extension to the name you give it).

If you create your own shared object files and don’t install them in /usr/lib, then you will need to set the LD_LIBRARY_PATH environment variable so that the dynamic linker can find your file at runtime.

MultiArch

MultiArch (Multi-Architecture) is the "capability of an Operating System to install and run applications of multiple different binary targets on the same system". It is the "general mechanism for installing libraries of more than one architecture on a system". This makes software packages more portable and simplifies the build process. For example, this allows 32-bit applications to run on 64-bit architectures.

When building gcc, the configure switch --enable-multiarch enables the MultiArch mechanism.

 


by sofiangotrong at October 11, 2017 07:51 PM


Shaun Richardson

Lab 3 – Part II

After a pretty lengthy break from thinking about Assembly, here I am back at it! When I last left the lab I was getting a pretty interesting error when trying to print out a loop from 0 to 30.

After reviewing the previous code and the error I was getting, I made a few alterations to the code. After trying for about an hour to correct my error, I decided just to rewrite the entire code by hand. I was getting strange errors about files not ending in a newline (\n) and quotes not being closed… so going off any normal computer-related issue, I restarted it! I made some cosmetic changes to the lab code (just more visually readable for myself) while also rewriting everything. I got it to work!

Lab3_2-1

Here is the changed code:

Lab3-xerx

Next step is to do this all over again on Aarchie! Luckily I can use the exact same logic and just transfer it over and change the syntax, right? The first attempt resulted in this:

Lab3_2-2

It's right! Oh wait, no it's not. Why do we have those weird characters for loops 20, 29 and 30? Looking back at my code, I did some weird stuff with registers, which I then fixed, and here is the result:

Lab3_2-3

With the code:

Lab3-aarch

Thoughts

So after this frustrating lab I am finally done. I will admit, I ended up looking up A LOT of stuff, talked to other students, and watched some tutorials online to be able to get this done. Coding in Assembly is a challenge that I had written off since we were only doing a simple looping program; however, dealing with all the registers and keeping track of where stuff is, or which register gets used when dividing or finding the remainder, is very challenging.

On the topic of x86 vs AArch64, I definitely enjoyed the syntax of x86 a lot more. It makes a lot more sense in my mind (for example, mov %r15,%r10: reading that left to right is moving the contents of r15 into r10, it makes sense!). With AArch64, I struggled a lot and messed up a lot of syntax because it's an odd way to write things. Especially dealing with the division. In x86 it was simple: set up the values to be divided, initialize the remainder, and you have your values. In AArch64 it seemed a lot more complicated.


by srichardson6blog at October 11, 2017 06:31 AM


Azusa Shimazaki

Vectorization Lab

For this lab, I needed to examine SIMD (single instruction, multiple data) vectorization with the GCC compiler on AArch64.

1. Write the code
The first instruction was....
"1, Write a short program that creates two 1000-element integer arrays and fills them with random numbers in the range -1000 to +1000, then sums those two arrays element-by-element to a third array, and finally sums the third array and prints the result."
https://wiki.cdot.senecacollege.ca/wiki/SPO600_Vectorization_Lab

To meet those requirements, I made a C source file named "sum.c":


#include <stdio.h>
#include <stdlib.h>

int main() {
    int first[1000];
    int second[1000];
    int third[1000];
    int total = 0;
    int i;

    // fill with random numbers -1000 to 1000
    for (i = 0; i < 1000; i++) {
        first[i] = rand() % 2000 + 1 - 1000;
        second[i] = rand() % 2000 + 1 - 1000;
    }

    // sum two arrays
    for (i = 0; i < 1000; i++) {
        third[i] = first[i] + second[i];
    }

    // sum numbers in the third array
    for (i = 0; i < 1000; i++) {
        total += third[i];
    }

    printf("total= %d \n", total);
}
The output:     total= -16332

2. Compile the program
I compiled the code on AArch64 in two different ways (non-optimized, and tree-vectorized) and made two files to compare the results.
$ gcc sum.c -g -O0 -o sum.out
$ gcc sum.c -g -O3 -ftree-vectorize -o sumV.out
3. Dump the files
I used the --source option instead of -d to see the detail.
$ objdump --source  sum.out
$ objdump --source  sumV.out 
4. Result

The output was as follows. I put colour on the parts of the text that belong to the same stream.

 //sum.out (no optimization)

00000000004005f4 <main>: 
#include<stdio.h>
#include<stdlib.h>

int  main() {
  4005f4:       d285e010        mov     x16, #0x2f00                    // #12032 <- load x16 with 12032
  4005f8:       cb3063ff        sub     sp, sp, x16                             <-stack pointer - x16
  4005fc:       a9007bfd        stp     x29, x30, [sp]                        <- store into a stack
  400600:       910003fd        mov     x29, sp                                  <-move sp to x29(frame pointer which stores an address)

     int first[1000];
     int second[1000];
     int third[1000];
     int total =0;
  400604:       b92effbf        str     wzr, [x29,#12028]      <- store wzr to the address pointed to by [x29,#12028] (register to memory)
     int i;
     //fill with random numbers -1000 to 1000
     for (i = 0; i < 1000; i++) {
  400608:       b92efbbf        str     wzr, [x29,#12024]     <- store wzr to the address pointed to by [x29,#12024] (register to memory)
  40060c:       14000027        b       4006a8 <main+0xb4>      <-main +180     branch?

             first[i] = rand() % 2000 + 1 -1000;
  400610:       97ffffa8        bl      4004b0 <rand@plt>                      <-  call subroutine
  400614:       2a0003e1        mov     w1, w0                                    <- move  w0 data to w1
  400618:       5289ba60        mov     w0, #0x4dd3                     // #19923  <-load w0 with 19923
  40061c:       72a20c40        movk    w0, #0x1062, lsl #16                   <- insert 0x1062 into the upper 16 bits of w0
  400620:       9b207c20        smull   x0, w1, w0                                    <- load x0 with w1*w0 (signed multiply)
  400624:       d360fc00        lsr     x0, x0, #32
  400628:       13077c02        asr     w2, w0, #7
  40062c:       131f7c20        asr     w0, w1, #31
  400630:       4b000040        sub     w0, w2, w0
  400634:       5280fa02        mov     w2, #0x7d0                      // #2000    <-load w2 with 2000
  400638:       1b027c00        mul     w0, w0, w2                                          <-load w0 with w0*w2
  40063c:       4b000020        sub     w0, w1, w0                                             <-load w0 with w1-w0
  400640:       510f9c02        sub     w2, w0, #0x3e7                 <-999         <-load w2 with w0-999
  400644:       b9aefba0        ldrsw   x0, [x29,#12024]
  400648:       d37ef400        lsl     x0, x0, #2
  40064c:       914007a1        add     x1, x29, #0x1, lsl #12           <- load x1 with x29+4096 (0x1 shifted left 12)
  400650:       913d6021        add     x1, x1, #0xf58                 <- load x1 with x1+3928
  400654:       b8206822        str     w2, [x1,x0]                       <- store register w2 at [x1,x0]

             second[i] = rand() % 2000 + 1 -1000;
  400658:       97ffff96        bl      4004b0 <rand@plt>            <- same process with first loop
  40065c:       2a0003e1        mov     w1, w0                     
  400660:       5289ba60        mov     w0, #0x4dd3                     // #19923 
  400664:       72a20c40        movk    w0, #0x1062, lsl #16
  400668:       9b207c20        smull   x0, w1, w0
  40066c:       d360fc00        lsr     x0, x0, #32
  400670:       13077c02        asr     w2, w0, #7
  400674:       131f7c20        asr     w0, w1, #31
  400678:       4b000040        sub     w0, w2, w0
  40067c:       5280fa02        mov     w2, #0x7d0                      // #2000
  400680:       1b027c00        mul     w0, w0, w2
  400684:       4b000020        sub     w0, w1, w0
  400688:       510f9c02        sub     w2, w0, #0x3e7
  40068c:       b9aefba0        ldrsw   x0, [x29,#12024]
  400690:       d37ef400        lsl     x0, x0, #2
  400694:       913ee3a1        add     x1, x29, #0xfb8             <- load x1 with x29+4024
  400698:       b8206822        str     w2, [x1,x0]

     for (i = 0; i < 1000; i++) {
  40069c:       b96efba0        ldr     w0, [x29,#12024]         <-load register w0 from the address pointed to by [x29,#12024]
  4006a0:       11000400        add     w0, w0, #0x1            <- add 1 to w0 
  4006a4:       b92efba0        str     w0, [x29,#12024]          <- store w0 at [x29,#12024] (register to memory)
  4006a8:       b96efba0        ldr     w0, [x29,#12024]         <-memory to register
  4006ac:       710f9c1f        cmp     w0, #0x3e7                  <- compare w0 with 999
  4006b0:       54fffb0d        b.le    400610 <main+0x1c>          <- loop back while w0 <= 999

     }

     //sum two arrays
     for (i = 0; i < 1000; i++) {
  4006b4:       b92efbbf        str     wzr, [x29,#12024]    <- store wzr at [x29,#12024] (register to memory)
  4006b8:       14000012        b       400700 <main+0x10c>

             third[i] = first[i] + second[i];
  4006bc:       b9aefba0        ldrsw   x0, [x29,#12024]
  4006c0:       d37ef400        lsl     x0, x0, #2
  4006c4:       914007a1        add     x1, x29, #0x1, lsl #12    <- load x1 with x29+4096
  4006c8:       913d6021        add     x1, x1, #0xf58             <- load x1 with x1+3928
  4006cc:       b8606821        ldr     w1, [x1,x0]                 <-load register w1 from the address pointed
  4006d0:       b9aefba0        ldrsw   x0, [x29,#12024]
  4006d4:       d37ef400        lsl     x0, x0, #2
  4006d8:       913ee3a2        add     x2, x29, #0xfb8     <- load x2 with x29+4024
  4006dc:       b8606840        ldr     w0, [x2,x0]              <-load register w0 from the address pointed
  4006e0:       0b000022        add     w2, w1, w0
  4006e4:       b9aefba0        ldrsw   x0, [x29,#12024]
  4006e8:       d37ef400        lsl     x0, x0, #2
  4006ec:       910063a1        add     x1, x29, #0x18       <- load x1 with x29+24
  4006f0:       b8206822        str     w2, [x1,x0]        <- store w2 at [x1,x0] (register to memory)
     for (i = 0; i < 1000; i++) {
  4006f4:       b96efba0        ldr     w0, [x29,#12024]        
  4006f8:       11000400        add     w0, w0, #0x1               
  4006fc:       b92efba0        str     w0, [x29,#12024]     
  400700:       b96efba0        ldr     w0, [x29,#12024]     
  400704:       710f9c1f        cmp     w0, #0x3e7                
  400708:       54fffdad        b.le    4006bc <main+0xc8>  

     }

     //sum numbers in the third array
     for (i = 0; i < 1000; i++) {
  40070c:       b92efbbf        str     wzr, [x29,#12024]          <- register to memory
  400710:       1400000b        b       40073c <main+0x148>

             total += third[i];
  400714:       b9aefba0        ldrsw   x0, [x29,#12024]
  400718:       d37ef400        lsl     x0, x0, #2
  40071c:       910063a1        add     x1, x29, #0x18            <- load x1 with x29+24
  400720:       b8606820        ldr     w0, [x1,x0]           <- load register w0 from the address pointed to by [x1, x0]
  400724:       b96effa1        ldr     w1, [x29,#12028]    <- load register w1 from the address pointed to by [x29,#12028]
  400728:       0b000020        add     w0, w1, w0
  40072c:       b92effa0        str     w0, [x29,#12028]   <- store w0 at [x29,#12028] (register to memory)
     for (i = 0; i < 1000; i++) {
  400730:       b96efba0        ldr     w0, [x29,#12024]  
  400734:       11000400        add     w0, w0, #0x1               
  400738:       b92efba0        str     w0, [x29,#12024]      
  40073c:       b96efba0        ldr     w0, [x29,#12024]        
  400740:       710f9c1f        cmp     w0, #0x3e7                  
  400744:       54fffe8d        b.le    400714 <main+0x120>   

     }

     printf("total= %d \n" , total);
  400748:       90000000        adrp    x0, 400000 <_init-0x468>
  40074c:       9120a000        add     x0, x0, #0x828                      <- add 2088
  400750:       b96effa1        ldr     w1, [x29,#12028]                <--memory to register (load)
  400754:       97ffff63        bl      4004e0 <printf@plt>
  400758:       52800000        mov     w0, #0x0                        // #0

}
  40075c:       a9407bfd        ldp     x29, x30, [sp]
  400760:       d285e010        mov     x16, #0x2f00                    // #12032
  400764:       8b3063ff        add     sp, sp, x16
  400768:       d65f03c0        ret <- return
  40076c:       00000000        .inst   0x00000000 ; undefined





###############################################################################
//sumV.out (vectorized)

00000000004004f0 <main>:
#include<stdio.h>
#include<stdlib.h>

int  main() {      ***first main***
  4004f0:       d285e410        mov     x16, #0x2f20                    // #12064
  4004f4:       cb3063ff        sub     sp, sp, x16
  4004f8:       a9007bfd        stp     x29, x30, [sp]
  4004fc:       910003fd        mov     x29, sp                                <-move sp to x29
  400500:       a90153f3        stp     x19, x20, [sp,#16]              <- save callee-saved pair x19, x20 at sp+16
     int third[1000];
     int total =0;
     int i;
     //fill with random numbers -1000 to 1000
     for (i = 0; i < 1000; i++) {
             first[i] = rand() % 2000 + 1 -1000;
  400504:       5289ba74        mov     w20, #0x4dd3                    // #19923

int  main() {    ***second main***
  400508:       a9025bf5        stp     x21, x22, [sp,#32]                <- save callee-saved pair x21, x22 at sp+32
  40050c:       910103b6        add     x22, x29, #0x40             <- x22 = x29 + 64
  400510:       913f83b5        add     x21, x29, #0xfe0            <- x21 = x29 + 4064
  400514:       f9001bf7        str     x23, [sp,#48]                     <- save x23 at sp+48 (register to memory)
             first[i] = rand() % 2000 + 1 -1000;
  400518:       72a20c54        movk    w20, #0x1062, lsl #16     <-4194   movk ...Move 16-bit immediate into register and keep other bits unchanged
  40051c:       5280fa13        mov     w19, #0x7d0                     // #2000   load w19 with 2000          <-rand max
int  main() {    ***third main***
  400520:       d2800017        mov     x23, #0x0                       // #0
             first[i] = rand() % 2000 + 1 -1000;
  400524:       97ffffe3        bl      4004b0 <rand@plt>
  400528:       9b347c01        smull   x1, w0, w20       <-Signed Multiply Long multiplies two 32-bit register values
  40052c:       9367fc21        asr     x1, x1, #39                        <- asr: arithmetic shift right by 39 bits
  400530:       4b807c21        sub     w1, w1, w0, asr #31
  400534:       1b138020        msub    w0, w1, w19, w0             <- Multiply-subtract
  400538:       510f9c00        sub     w0, w0, #0x3e7           <- subtract decimal 999  
  40053c:       b8376ac0        str     w0, [x22,x23]                     <- register to memory

             second[i] = rand() % 2000 + 1 -1000;
  400540:       97ffffdc        bl      4004b0 <rand@plt>  <-same as first loop
  400544:       9b347c01        smull   x1, w0, w20
  400548:       9367fc21        asr     x1, x1, #39
  40054c:       4b807c21        sub     w1, w1, w0, asr #31
  400550:       1b138020        msub    w0, w1, w19, w0
  400554:       510f9c00        sub     w0, w0, #0x3e7          <- decimal 999
  400558:       b8376aa0        str     w0, [x21,x23]                   <- register to memory

  40055c:       910012f7        add     x23, x23, #0x4          <- decimal 4
     for (i = 0; i < 1000; i++) {
  400560:       f13e82ff        cmp     x23, #0xfa0                        <- compare x23 with 4000; repeat while not equal
  400564:       54fffe01        b.ne    400524 <main+0x34>      <- loop back
  400568:       d283f002        mov     x2, #0x1f80                     // #8064
  40056c:       8b0203a1        add     x1, x29, x2
  400570:       d2800000        mov     x0, #0x0                        // #0
     }

     //sum two arrays
     for (i = 0; i < 1000; i++) {
             third[i] = first[i] + second[i];
  400574:       3ce06ac0        ldr     q0, [x22,x0]                   <--memory to register (load)
  400578:       3ce06aa1        ldr     q1, [x21,x0]                   <--memory to register (load)
  40057c:       4ea18400        add     v0.4s, v0.4s, v1.4s
  400580:       3ca06820        str     q0, [x1,x0]                     <- register to memory
  400584:       91004000        add     x0, x0, #0x10           <- decimal 16
  400588:       f13e801f        cmp     x0, #0xfa0                      <- compare x0 with 4000; repeat while not equal
  40058c:       54ffff41        b.ne    400574 <main+0x84>      <- loop back

  400590:       4f000400        movi    v0.4s, #0x0
  400594:       aa0103e0        mov     x0, x1
  400598:       d285e401        mov     x1, #0x2f20                     // #12064
  40059c:       8b0103a1        add     x1, x29, x1
     }

     //sum numbers in the third array
     for (i = 0; i < 1000; i++) {
             total += third[i];
  4005a0:       3cc10401        ldr     q1, [x0],#16                  <--memory to register (load)
  4005a4:       4ea18400        add     v0.4s, v0.4s, v1.4s
  4005a8:       eb01001f        cmp     x0, x1                 <- compare x0 with x1; repeat while not equal
  4005ac:       54ffffa1        b.ne    4005a0 <main+0xb0>      <- loop back  main+176

     }

     printf("total= %d \n" , total);
  4005b0:       4eb1b800        addv    s0, v0.4s                               <- addv = add across vector
  4005b4:       90000000        adrp    x0, 400000 <_init-0x468>      <-1128  permitting the address calculation at a 4KB aligned memory region
  4005b8:       911ea000        add     x0, x0, #0x7a8                           <-1960
  4005bc:       0e043c01        mov     w1, v0.s[0]                            put v0.s[0] into w1
  4005c0:       97ffffc8        bl      4004e0 <printf@plt>

}
  4005c4:       f9401bf7        ldr     x23, [sp,#48]                            <--memory to register (load)
  4005c8:       a94153f3        ldp     x19, x20, [sp,#16]
  4005cc:       52800000        mov     w0, #0x0                        // #0
  4005d0:       a9425bf5        ldp     x21, x22, [sp,#32]
  4005d4:       d285e410        mov     x16, #0x2f20                    // #12064   <- load x16 with 12064
  4005d8:       a9407bfd        ldp     x29, x30, [sp]
  4005dc:       8b3063ff        add     sp, sp, x16
  4005e0:       d65f03c0        ret                                                          <-return
  4005e4:       00000000        .inst   0x00000000 ; undefined


#################################################################################

The two of them are totally different.
The first, non-optimized one follows the order of the original code. There are many repeated parts, so the code is quite long (127 lines).

The vectorized one is shorter (94 lines) and looks simpler.
In the emitted code, three <main> labels appear, but the contents of each block are different. All of them contain the line "first[i] = rand() % 2000 + 1 -1000;", which suggests the first loop was split into separate pieces.
Remarkably, the vectorized one has this line, which does not appear in the other:
 400500:       a90153f3        stp     x19, x20, [sp,#16]              <- save callee-saved pair x19, x20 at sp+16
and the differences start here.

There is another difference: the loop counter compared in the cmp lines is 4000, not 1000.
That is because the loop counts bytes: 1000 ints × 4 bytes each.
400588:       f13e801f        cmp     x0, #0xfa0                      <- compare x0 with 4000; repeat while not equal
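In C terms, that byte-counting stride looks roughly like the sketch below. This is my own scalar rendering, not the compiler's actual code; the four-lane width is an assumption read off the v0.4s arrangement (one 16-byte q register holds four 32-bit ints).

```c
#include <stddef.h>

#define N 1000

/* Add four ints per iteration, advancing a byte offset by 16 each time,
 * the way the q-register loop in the dump does. Illustrative sketch only. */
void add_arrays_16byte_stride(const int *a, const int *b, int *out)
{
    for (size_t byte = 0; byte != N * sizeof(int); byte += 4 * sizeof(int)) {
        size_t i = byte / sizeof(int);
        for (size_t lane = 0; lane < 4; lane++)  /* one SIMD lane each */
            out[i + lane] = a[i + lane] + b[i + lane];
    }
}
```

The loop exit condition is `byte != 4000`, matching the `cmp x0, #0xfa0` / `b.ne` pair above.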

There are some obvious signs of vectorization: the letter v in the register names.

  4005a4:       4ea18400        add     v0.4s, v0.4s, v1.4s

  4005b0:       4eb1b800        addv    s0, v0.4s                               <- addv = add across vector

The syntax of "addv" breaks down as:
s ... type of the scalar destination register
0 ... number of the SIMD and FP destination register
v0 ... name of the SIMD and FP source register
4s ... arrangement specifier (four 32-bit lanes)


It seems the GCC -O3 -ftree-vectorize option works well enough to change the composition of the code.
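As a side note, the constants #0x4dd3 and #0x1062 that appear before both rand() loops are not arbitrary: movk assembles them into w0 = 0x10624dd3, and the smull/lsr/asr sequence then computes n / 2000 as a multiply-by-reciprocal, avoiding a slow division for rand() % 2000. Below is my own C reconstruction of that identity, so treat the exact correspondence to the emitted code as an assumption:

```c
#include <stdint.h>

/* n / 2000 computed the way the dump does it:
 * 0x10624DD3 is roughly 2^39 / 2000; multiply, shift right by 39,
 * then correct the sign with (n >> 31), mirroring
 * smull / lsr #32 / asr #7 / sub w0, w2, w0. */
int32_t div2000(int32_t n)
{
    int64_t q = ((int64_t)n * 0x10624DD3) >> 39; /* smull + lsr #32 + asr #7 */
    return (int32_t)(q - (n >> 31));             /* add 1 back for negative n */
}
```

The msub instruction that follows in the dump then forms the remainder as n - (n / 2000) * 2000.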

5. Extra
Since I saw "__aligned__" in a class lecture,
I also made a program, "sum2.c", using it.
#include<stdio.h>
#include<stdlib.h>

int main() {
    typedef int aint __attribute__((__aligned__(4)));
    aint first[1000];
    aint second[1000];
    aint third[1000];
    int total = 0;
    int i;

    //fill with random numbers -1000 to 1000
    for (i = 0; i < 1000; i++) {
        first[i] = rand() % 2000 + 1 - 1000;
        second[i] = rand() % 2000 + 1 - 1000;
    }

    //sum two arrays
    for (i = 0; i < 1000; i++) {
        third[i] = first[i] + second[i];
    }

    //sum numbers in the third array
    for (i = 0; i < 1000; i++) {
        total += third[i];
    }

    printf("total= %d \n", total);
}
and compiled it in two ways and dumped it like the first one.

$  gcc sum2.c -g -O0  -o sum2.out
$  gcc sum2.c -g -O3 -ftree-vectorize  -o sumV2.out
$  objdump --source  sum2.out
$  objdump --source  sumV2.out
I was expecting something different caused by using "aligned".
However, "main"'s contents were the same as each of the first versions.
sum.out and sum2.out .... same
sumV.out and sumV2.out .... same

by Az Smith (noreply@blogger.com) at October 11, 2017 03:12 AM

October 10, 2017


Fateh Sandhu

Lab 5 – in Progress

My experience so far has been good, seeing as I have managed not to destroy the software while making changes and poking at it. As of now it has been a lot easier than I thought it would be. At first it was daunting to think about working with something of this magnitude. Right now I am working on the Thimble bug.

 

Compiling and running it

Compiling it was a bit confusing at first but not undoable. The instructions on the GitHub page of the project are simple and easy to follow, but I still wished they were a little shorter and more concise.

Screen Shot 2017-10-06 at 10.22.59

But after some effort I managed to get the required plug-ins up and running. Once I had it working, I replicated the situation for the bug.

Screen Shot 2017-10-06 at 10.47.12.png

 

I have gone through the code and am trying to figure out where to start implementing the required changes in the source code.


by firefoxmacblog at October 10, 2017 07:48 PM


Eric Ferguson

Loops in Assembly Architectures (Lab 3)

The modern C and C++ languages are often taken for granted for their simple syntax and compiler intuition. In Assembly, every variable and value is stored in registers, simple output lines take multiple statements, and a number can only be printed one digit (0 to 9) at a time without conversion logic. Below I examine two different architectures, x86_64 and aarch64:
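To make that single-digit limitation concrete, here is roughly the conversion loop that printf performs for you, and that must be hand-written (divide by 10, add '0', reverse) before an assembly program can print a multi-digit number. This C version is only an illustration of the idea, not code from the lab:

```c
/* Convert a non-negative integer to decimal ASCII text.
 * Digits come out least-significant first, so they are reversed at the end. */
void itoa10(unsigned int n, char *buf)
{
    char tmp[16];
    int len = 0;
    do {
        tmp[len++] = (char)('0' + n % 10); /* one ASCII digit */
        n /= 10;
    } while (n != 0);
    for (int i = 0; i < len; i++)          /* reverse into the caller's buffer */
        buf[i] = tmp[len - 1 - i];
    buf[len] = '\0';
}
```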


x86_64

The loop I have created that outputs Loop: number thirty times can be found here. Writing this was extremely difficult, constantly fighting a lack of good documentation and examples online. To write this code I viewed many examples online, wrote pseudo code, and wrote the program in C++ before beginning. Debugging was difficult, as the errors given were vague or ridiculous at times (no // style commenting comes to mind). The Visual Studio compiler for writing C or C++, by contrast, is much more informative and allows for easy break points, live variable information, and detailed information on error codes. I dislike x86 for its limited number of available registers but like it for its simpler syntax.

aarch64

The loop previously mentioned can be found here. I found aarch64 harder to write because of its complicated syntax and was very grateful I wrote the x86_64 code first, as it was then simply a matter of rewriting it. The debugging in aarch64, conversely, is superb and actually offers suggestions on how to fix code (i.e. did you mean...?). aarch64 has many available registers; however, some syntax issues are strange, as although x (register#) works in most cases, some lines require w (register#) instead. I found it not too time consuming to write this, though I did get stuck on the division for a little while due to the unnecessarily complicated syntax.


In conclusion, for the amount of time it took to write in assembly the pros of smaller size and faster compile time are not worth the frustration of writing it. This lab however was an excellent experience for seeing how the compiler interprets code from higher level languages.  

by Eric Ferguson (noreply@blogger.com) at October 10, 2017 04:04 AM


Henrique Coelho

Diving into the Linux kernel and making a kernel module

My goal for the next few days/weeks is to dive deeper into the Linux kernel and learn how it works, and I will document whatever I make here. Why? Because I can!

So, first of all, what is the kernel? The kernel is a little program, usually named vmlinuz-[version number]. It is around 5 MB and resides comfortably in your /boot directory, if you are using Linux. This program gets loaded by a bootloader (one of the most popular is Grub), which I will learn more about later on. The bootloader will pass parameters to the kernel, and in return, the kernel will provide an API, to which we can make System Calls - usually done by the Standard C Library (in this post I built the GNU C Library from source code, if you want to see it). Another way to interact with the kernel, aside from its API, is through a virtual file system - I am planning to make another post about this later on.

So, what does the kernel do anyway? The kernel is a layer that sits between the hardware and the Standard C Library, it provides a layer that helps us interact with the hardware, peripheral devices, allocate memory, and so on. It also enforces privileges in order to tell if an operation is allowed or not. Some CPU instructions can only be done by the kernel, and not by any software that sits on top of it. Putting it in simple terms, the kernel is an abstraction layer on top of the hardware: an application calls a function in the Standard C Library, which calls the kernel, which interacts with the hardware.

Another important detail is that the kernel is not a huge program - it is only around 5 MB - and has only the essential pieces to support the Operating System. However, we can expand its functionality through modules - Loadable Kernel Modules (LKMs). These modules are normally used by device drivers; they get "linked" into the kernel so they run in the same scope. We can add and remove modules in the kernel at runtime, and this is what I am going to do in this post.

Modules are built for a specific kernel version, and are conveniently installed in /lib/modules/[your kernel version]. If you use the command uname -r, you can get the name and version of your kernel.

Before I show you the module, some useful commands:

Useful commands
  • lsmod

Will give you a list of all loaded modules you have. This is one of the lines I got:

usbcore               208896  9 uvcvideo,usbhid,snd_usb_audio

These columns are, respectively: name of the module, size, by how many things (I don't know exactly what they are) it is used, and its dependencies (there were more dependencies there, but I removed some)

  • modinfo

Will give you a detailed description of a module. This is the description for usbcore:

filename:       /lib/modules/4.11.9-1-ARCH/kernel/drivers/usb/core/usbcore.ko.gz
license:        GPL
alias:          usb:v*p*d*dc*dsc*dp*ic09isc*ip*in*
alias:          usb:v*p*d*dc09dsc*dp*ic*isc*ip*in*
alias:          usb:v05E3p*d*dc*dsc*dp*ic09isc*ip*in*
depends:        usb-common
intree:         Y
vermagic:       4.11.9-1-ARCH SMP preempt mod_unload modversions 
parm:           usbfs_snoop:true to log all usbfs traffic (bool)
parm:           usbfs_snoop_max:maximum number of bytes to print while snooping (uint)
parm:           usbfs_memory_mb:maximum MB allowed for usbfs buffers (0 = no limit) (uint)
parm:           authorized_default:Default USB device authorization: 0 is not authorized,
                1 is authorized, -1 is authorized except for wireless USB (default,
                old behaviour (int)
parm:           blinkenlights:true to cycle leds on hubs (bool)
parm:           initial_descriptor_timeout:initial 64-byte descriptor request timeout in
                milliseconds (default 5000 - 5.0 seconds) (int)
parm:           old_scheme_first:start with the old device initialization scheme (bool)
parm:           use_both_schemes:try the other device initialization scheme if the first
                one fails (bool)
parm:           nousb:bool
parm:           autosuspend:default autosuspend delay (int)
  • insmod

Used to insert (load) modules from a path

  • rmmod

Used to remove modules

  • modprobe

Used to load/unload modules. It is more powerful than insmod and rmmod, and can handle dependencies

Making the module

A module is nothing more than a C program. It basically starts as two functions: one for initialising, and one for exiting (in case you have some cleanup to do).

#include <linux/init.h>
#include <linux/module.h>

int initmodule(void)
{
  return 0;
}

void exitmodule(void)
{

}

module_init(initmodule);
module_exit(exitmodule);

This is the basic structure. Notice how we are importing two headers: linux/init.h and linux/module.h. These are headers that will give us some parts of the API to use in the kernel! Another interesting thing is: why am I not importing stdio.h? Well, this is because we are in a different layer: this program will not sit on top of the kernel and the Standard C Library - it is executed WITH the kernel, so there is no stdio.h!

Alright. I'm going to add a little more stuff to this code now. How about some documentation about the module?

MODULE_AUTHOR("Henrique S. Coelho");
MODULE_DESCRIPTION("A completely useless kernel module");
MODULE_LICENSE("GPL");

It would also be nice to add some functionality to it. How about this:

  • The module will ask your name, and will greet you like Hello Joe!; if no name is provided we will assume the name "there" so it will be displayed as Hello there! - hacky, I know! I love it.
  • The module will ask how many times you want this message to be printed. The default will be 5
  • The module will ask if it should say "goodbye" when it exits. The default is true

These options will be passed as arguments to the module.

// Default arguments
static int   repeats = 5;
static char  *name   = "there";
static bool  saybye  = true;

// default value, type, and permission
// S_IRUGO = value is read only
module_param(repeats, int,   S_IRUGO);
module_param(name,    charp, S_IRUGO);
module_param(saybye,  bool,  S_IRUGO);

After making the logic, this is how our module looks like:

// mymodule.c

#include <linux/init.h>
#include <linux/module.h>

static int   repeats = 5;
static char  *name   = "there";
static bool  saybye  = true;

// S_IRUGO = value is read only
module_param(repeats, int,   S_IRUGO);
module_param(name,    charp, S_IRUGO);
module_param(saybye,  bool,  S_IRUGO);

MODULE_AUTHOR("Henrique S. Coelho");
MODULE_DESCRIPTION("A completely useless kernel module");
MODULE_LICENSE("GPL");

int initmodule(void)
{
  unsigned short i;
  for (i = 0; i < repeats; i++)
    printk("Hello %s!\n", name);
  return 0;
}

void exitmodule(void)
{
  if (saybye)
    printk("Bye bye!\n");
}

module_init(initmodule);
module_exit(exitmodule);

Another detail you may have noticed: what is printk? This is a function used by the kernel to print messages (no, no printf here). These messages will be directed to the buffer of the kernel.

Awesome! Now, how do we compile this?

To compile this thing, we will make a Makefile in this directory. It should contain this line:

obj-m := mymodule.o

mymodule.o is the name of my module (conveniently called mymodule.c) after it is compiled.

We will not run this Makefile - the kernel will. We will use a Makefile from the kernel, which will use this Makefile to compile the module. Confusing? Yes, it is.

This is how we call the Makefile of the kernel:

$ make -C /lib/modules/`uname -r`/build M=$(PWD) modules

Some explanation:

  • -C /lib/modules/`uname -r`/build Tells make where the Makefile is. The makefile for the kernel is located in /lib/modules/[my kernel version]/build - I used the command uname -r as a shortcut to get the name and version of my kernel
  • M=$(PWD) Tells make where to build the module. In this case: in my current directory
  • modules Tells which section of the Makefile to execute (remember make install?). We are telling make to make a module

So, again: we are executing the kernel's Makefile, which will execute the Makefile of our project.

Now, I like to automate things, so I made this Makefile instead:

all: mymodule.c
    make -C /lib/modules/`uname -r`/build M=$(PWD) modules

clean:
    make -C /lib/modules/`uname -r`/build M=$(PWD) clean

obj-m := mymodule.o

Nice. I can just call make and it builds the module. make clean will clean the directory.

After running it, I get this lovely output:

make -C /lib/modules/`uname -r`/build M=/home/hscasn/Desktop/kmodule modules
make[1]: Entering directory '/usr/lib/modules/4.11.9-1-ARCH/build'
  CC [M]  /home/hscasn/Desktop/kmodule/mymodule.o
  Building modules, stage 2.
  MODPOST 1 modules
  LD [M]  /home/hscasn/Desktop/kmodule/mymodule.ko
make[1]: Leaving directory '/usr/lib/modules/4.11.9-1-ARCH/build'

No erros, unlike this sentence!

Cool. So, we got a .ko file. This stands for Kernel Object, and this is our module.

Before I execute it, I will open a terminal and type the following command:

$ dmesg -w

This command prints the message buffer from the kernel (our messages will be there). The -w option will make the command wait and print new lines as they come.

Now we can finally load the module:

$ sudo insmod ./mymodule.ko

Immediately, this pops up in my other terminal (with dmesg running):

[23430.693089] Hello there!
[23430.693090] Hello there!
[23430.693090] Hello there!
[23430.693090] Hello there!
[23430.693091] Hello there!

It is alive! Let's try unloading the module:

~/kmodule $ sudo rmmod mymodule

Result:

[23434.868493] Bye bye!

Now I will try with other arguments: this time my name will be "Joe", I want the message to be printed one time, and I do not want the module to say goodbye:

~/kmodule $ sudo insmod ./mymodule.ko repeats=1 name=Joe saybye=0
~/kmodule $ sudo rmmod mymodule

Output:

[23448.108956] Hello Joe!

Again. This time, my name will be David, I want the message to be repeated 3 times, and I want a goodbye message:

~/kmodule $ sudo insmod ./mymodule.ko name=David repeats=3 saybye=1
~/kmodule $ sudo rmmod mymodule

And the output:

[23468.661605] Hello David!
[23468.661605] Hello David!
[23468.661605] Hello David!
[23472.709457] Bye bye!

Today was a good day.

by Henrique at October 10, 2017 02:57 AM

October 09, 2017


Michael Pierre

Fix all the bugs! My work with Mozilla Thimble so far.

The final decision has been made and I have decided to jump in fixing Mozilla Thimble bugs. So far it has been fun, interesting, hard, and overall a great learning experience. My first bug was to add code that checks whether the Brackets server is running, and notifies the user if it isn't when they run "npm start" in Thimble's path. This addition will help minimize the headache for developers new to Thimble, because the Brackets server is required to create a new project and often it doesn't "click" that you need both the Brackets and Thimble servers running. This bug wasn't extremely hard; however, it was a great first bug for getting into contributing to open source projects. The Thimble developer community was very helpful, and they seemed to want me to learn and succeed in fixing this bug, which was really nice. Even in general with the pull requests I've filed, I've gotten a lot of help with using git and making changes so the code can be merged with the master project. first pull request

Currently I have two pull requests merged with Thimble master on GitHub and it's a great feeling. I could see myself jumping into different projects down the road just to see what I can accomplish. Working on these bugs has taught me how some big web applications are structured, as well as introduced me to node.js, which is something I have never worked with before. One thing I did have trouble with was getting the login and publish servers to run, which took hours of testing and working with classmates to fix. I could get the Brackets and Thimble servers to run; however, login and publish functionality would either not connect with the server or show a blank page. To fix this I tried reinstalling everything associated, like Vagrant, node.js, and VirtualBox, but I still ran into the same problem. Eventually I was convinced that it had to be a problem with Windows and started to work on dual booting Ubuntu so I could try it on a different OS. Ubuntu had a host of its own problems and I couldn't get past the "npm run build" step, and kept getting weird errors. Amongst my confusion and frustration, one of my classmates made a post in the Thimble chat which solved my problem. Basically, I had to make an app.bundle.js file from https://id.webmaker.org/assets/app.bundle.js and add it to "thimble.mozilla.org\services\id.webmaker.org\public." Gideon Thomas, a Thimble developer, mentioned that this file should be created at runtime, so this could be a bug in itself, which is interesting. Through this bug fixing process it has become apparent to me that fixing some bugs leads to more bugs, or at least a complementary bug. For example, the Brackets error notification I added led to the bug of the user needing to install "is-reachable" with npm install, because I used that package's features to add the notification. That bug wasn't hard to fix, with me just having to add is-reachable to the devDependencies in the package.json file.
It could have been hard, but the community and people like David Humphrey and Gideon Thomas did an excellent job of helping and guiding me with the fix. Overall, I hope to work on more bugs and contribute more to the Open Source community, because my experience has been great so far.


by michaelpierreblog at October 09, 2017 11:14 PM


Matthew Marangoni

Auto-Vectorization and SIMD Vectorization in GCC

In this exercise we are exploring single-instruction/multiple-data (SIMD) vectorization, along with the auto-vectorization capabilities of the GCC compiler. For anyone not familiar with the principles of vectorization, this Wikipedia article sums it up pretty nicely.

I've created a short program which does the following:

  • creates two 1000-element arrays and fills them with random integers ranging from -1000 to +1000
  • sums the two arrays element-by-element and stores the result in a third array
  • sums all elements of the third array and prints out the result
For experimental purposes, I've created 3 different versions of the above program to see if any improvements can be made by inspecting the disassembly instructions. Each of the programs was compiled on AArch64, and the command used to compile them is:

gcc -O3 -o lab5 lab5.c

How do we know this will (attempt to) vectorize our program? As stated in the GCC article on vectorization, vectorization is enabled by the flag -ftree-vectorize and ALSO by default when using -O3 optimization (which we used above).

Below is the source code for each version of the program, followed by the disassembly:

Version 1 source code:


Version 1 disassembly:

Version 2:

Version 2 disassembly:


Version 3:

Version 3 disassembly:

Observations


Looking at these 3 disassemblies, it's clear that version 1 has the fewest instructions, but it doesn't look like any vectorization is happening, so we'll ignore version 1 for the purposes of this analysis. Version 3 has the next fewest instructions (62) and does appear to have vectorization, so we'll dig into this version of our program and see what we can learn. Below is an annotated version of our disassembly for version 3 of our program:

0000000000400580 <main>:
/* moving a value to stack pointer */
  400580:       d285e410        mov     x16, #0x2f20                    // #12064
  400584:       cb3063ff        sub     sp, sp, x16

/* seed rand - srand(time(NULL)); */
  400588:       d283f000        mov     x0, #0x1f80                     // #8064
  40058c:       a9007bfd        stp     x29, x30, [sp]
  400590:       910003fd        mov     x29, sp
  400594:       a90153f3        stp     x19, x20, [sp,#16]

/* loop 1 - filling arrays with random values */
/* x[i] = rand() % RANGE + (MIN); */
  400598:       529a9c74        mov     w20, #0xd4e3                    // #54499
  40059c:       a9025bf5        stp     x21, x22, [sp,#32]
  4005a0:       72a83014        movk    w20, #0x4180, lsl #16
  4005a4:       a90363f7        stp     x23, x24, [sp,#48]
  4005a8:       910103b5        add     x21, x29, #0x40
  4005ac:       8b0003b7        add     x23, x29, x0
  4005b0:       913f83b6        add     x22, x29, #0xfe0
  4005b4:       d2800018        mov     x24, #0x0                       // #0
  4005b8:       5280fa33        mov     w19, #0x7d1                     // #2001
  4005bc:       d2800000        mov     x0, #0x0                        // #0
  4005c0:       97ffffd4        bl      400510 <time@plt>
  4005c4:       97ffffe7        bl      400560 <srand@plt>

/* x[i] = rand() % RANGE + (MIN); */
  4005c8:       97ffffda        bl      400530 <rand@plt>
/* SIMD VECTOR INSTRUCTION ----- */
  4005cc:       9b347c01        smull   x1, w0, w20
  4005d0:       9369fc21        asr     x1, x1, #41
  4005d4:       4b807c21        sub     w1, w1, w0, asr #31
  4005d8:       1b138020        msub    w0, w1, w19, w0
  4005dc:       510fa000        sub     w0, w0, #0x3e8
  4005e0:       b8387aa0        str     w0, [x21,x24,lsl #2]

/* y[i] = rand() % RANGE + (MIN); */
  4005e4:       97ffffd3        bl      400530 <rand@plt>
/* SIMD VECTOR INSTRUCTION ----- */
  4005e8:       9b347c01        smull   x1, w0, w20
  4005ec:       9369fc21        asr     x1, x1, #41
  4005f0:       4b807c21        sub     w1, w1, w0, asr #31
  4005f4:       1b138020        msub    w0, w1, w19, w0
  4005f8:       510fa000        sub     w0, w0, #0x3e8
  4005fc:       b8387ac0        str     w0, [x22,x24,lsl #2]

/* loop condition  - i++, i < MAX */
  400600:       91000718        add     x24, x24, #0x1
  400604:       f10fa31f        cmp     x24, #0x3e8
  400608:       54fffe01        b.ne    4005c8 <main+0x48>

/* loop 2 - adding the values from each array element */
  40060c:       d2800000        mov     x0, #0x0                        // #0
  400610:       3ce06aa0        ldr     q0, [x21,x0]
  400614:       3ce06ac1        ldr     q1, [x22,x0]
/* z[i] = x[i] + y[i]; */
/* VECTORIZED! -------------------------⌄------⌄------⌄ */
  400618:       4ea18400        add     v0.4s, v0.4s, v1.4s
  40061c:       3ca06ae0        str     q0, [x23,x0]

/* loop condition - i++, i < MAX */
  400620:       91004000        add     x0, x0, #0x10
  400624:       f13e801f        cmp     x0, #0xfa0
  400628:       54ffff41        b.ne    400610 <main+0x90>

/* loop 3 - sum values in array - sum += z[i]; */
/* VECTORIZED! -------------------------⌄ */
  40062c:       4f000400        movi    v0.4s, #0x0
  400630:       913e82e0        add     x0, x23, #0xfa0
  400634:       3cc106e1        ldr     q1, [x23],#16
/* VECTORIZED! ------------------------⌄------⌄------⌄ */
  400638:       0ea11000        saddw   v0.2d, v0.2d, v1.2s
  40063c:       eb17001f        cmp     x0, x23
/* VECTORIZED! ------------------------⌄------⌄------⌄ */
  400640:       4ea11000        saddw2  v0.2d, v0.2d, v1.4s
  400644:       54ffff81        b.ne    400634 <main+0xb4>

/* printf("%d\n", sum) */
/* VECTORIZED! -----------------------------⌄ */
  400648:       5ef1b800        addp    d0, v0.2d
  40064c:       d285e410        mov     x16, #0x2f20                    // #12064
  400650:       a94153f3        ldp     x19, x20, [sp,#16]
  400654:       90000000        adrp    x0, 400000 <_init-0x4d8>
  400658:       a9425bf5        ldp     x21, x22, [sp,#32]
  40065c:       9120e000        add     x0, x0, #0x838
/* VECTORIZED! -----------------------------⌄ */
  400660:       4e083c01        mov     x1, v0.d[0]
  400664:       a94363f7        ldp     x23, x24, [sp,#48]
  400668:       a9407bfd        ldp     x29, x30, [sp]
  40066c:       8b3063ff        add     sp, sp, x16
  400670:       17ffffc0        b       400570 <printf@plt>
  400674:       00000000        .inst   0x00000000 ; undefined

Pretty neat stuff, we can identify where our program was vectorized by looking for the SIMD vector registers. What do SIMD vector registers look like, and how can we understand them? Here's a pretty good explanation from the ARM Reference Manual:

What about the SIMD vector instructions? In our disassembly, the following SIMD vector instructions can be identified:
  • addp (add pairwise)
  • movi (move immediate)
  • smull (signed multiply long)
  • saddw saddw2 (signed add wide)

Reflection

It's quite the rabbit hole when you start digging into some of the things you can do with your compilers, and how you can optimize code. When working with things like loops that are going to execute (potentially) a large number of times, every little improvement can be significant. It definitely takes a deeper understanding of your platform and compiler, and may result in your code being a bit more verbose/complex, but the results can definitely be worth the trouble (it depends on your situation!). Assembly is never that exciting to break down and understand, but this has definitely been a worthwhile exercise.

by Matthew Marangoni (noreply@blogger.com) at October 09, 2017 09:51 PM


Saeed Mohiti

Assembler Lab

Coding an assembler program is much more difficult than coding in a higher-level language, because in high-level programming you don’t need to take care of lots of details such as the processor registers and memory.

In this lab, after building and running the different versions of the assembler program (AArch64 and x86_64) and the C version, I took a look at the generated code using “objdump -d” to compare them, which was the same exercise as in Lab 2.

Then we had to write code to print a loop with index values from 0 to 9.

In order to print the loop index value, a conversion from an int to a digit character is needed. In ASCII/ISO-8859-1/Unicode UTF-8, the digit characters are in the range 48-57 (0x30-0x39). Then we need to assemble the message to be printed for each line.

This is a code for the x86_64 architecture:

.text                             

.globl _start

start = 0                       /* starting value for the loop index */

max = 10                      /* loop exits when the index hits this number (loop condition is i<max) */

_start:

mov $start,%r15      /* loop index */

loop:

mov $len,%rdx            /* message length */

mov $msg,%rsi           /* message location */

mov $1,%rdi              /* file descriptor stdout */

mov $1,%rax              /* syscall sys_write */

syscall

inc %r15                     /* increment index */   

mov %r15,%r14

add $'0',%r14            /* convert r14 to ascii */

mov $num,%r13

mov %r14b,(%r13)

cmp $max,%r15          /* see if we’re done */

jne loop                         /* loop if we’re not */

mov $0,%rdi               /* exit status */

mov $60,%rax               /* syscall sys_exit */

syscall

.section .data

msg: .ascii "Loop: 0\n"

len = . - msg

num = msg + 6

and this one is for aarch64 architecture:

.text

.globl _start

start = 0              /* starting value for the loop index */

max = 10           /* loop exits when the index hits this number (loop condition is i<max) */

_start:

mov x19, start           /* loop index */

loop:

adr x1, msg               /* message location (memory address) */

mov x2, len               /* message length (bytes) */

mov x0, 1                  /* file descriptor: 1 is stdout */

mov x8, 64               /* write is syscall #64 */

svc 0                           /* invoke syscall */

add x19, x19, 1           /* increment index */

mov x20, x19

add x20, x20, '0'            /* convert index to ascii */

adr x21, msg               /* message location (memory address) */

strb w20, [x21,6]        /* store byte in msg, offset by 6 */

cmp x19, max              /* see if we’re done */

bne loop                       /* loop if we’re not */

mov x0, 0                      /* status -> 0 */

mov x8, 93                    /* exit is syscall #93 */

svc 0                  /* invoke syscall */

.data

msg: .ascii "Loop: 0\n"

len = . - msg

 

The output for both versions is the same:

 

Loop: 0

Loop: 1

Loop: 2

Loop: 3

Loop: 4

Loop: 5

Loop: 6

Loop: 7

Loop: 8

Loop: 9

 

After that, we had to modify the code to count the loop from 0 to 30. Since the index can now be two digits, we need division: on x86_64 we use the div instruction, which takes the dividend from rax and the divisor from the register supplied as an argument.

This is the modified code for aarch64:

.text

.globl _start

start = 0                       /* starting value for the loop index */

max = 31                        /* loop exits when the index hits this number (loop condition is i<max) */

_start:

    mov     x19,start           /* loop index */

    mov     x20,0                       /* copy ascii 0 to x20 to use as a comparison for quotient */

    adr     x25,msg

loop:  

    mov     x21,10              /* load x21 with 10 */

    udiv    x22,x19,x21         /* divide x19 by x21 and store quotient in x22 */    

    add     w24,w22,0x30        /* convert quotient to ascii */

    cmp     w24,'0'             /* compare ascii converted quotient to ascii 0 */

    beq     contd               /* if 0, jump to contd */

    strb    w24,[x25,6]         /* if not 0, include in the output */

contd:

    msub    x23,x22,x21,x19     /* load x23 with x19-(x22*x21) = dividend - (quotient * 10) (get remainder) */

    add     w26,w23,'0'         /* convert remainder to ascii */

    strb    w26,[x25,7]         /* store byte in msg, offset by 7 */

    mov     x0, 1               /* file descriptor: 1 is stdout */

    adr     x1, msg                 /* message location (memory address) */

    mov     x2, len                 /* message length (bytes) */

    mov     x8, 64                  /* write is syscall #64 */

    svc     1                            /* invoke syscall */

    add     x19,x19,1           /* increment index */

    cmp     x19,max             /* see if we’re done */

    bne     loop                /* loop if we’re not */

    mov     x0, 0                    /* status -> 0 */

    mov     x8, 93                  /* exit is syscall #93 */

    svc     0                            /* invoke syscall */

.data

msg:    .ascii      "Loop:   \n"

len = . - msg

and then we had to modify it for x86_64:

The code is the same as the AArch64 version in terms of concept and logic; the differences are in the instructions, syntax, ordering, and structure.

 

.text

.globl    _start

start = 0                       /* starting value for the loop index */

max = 31                        /* loop exits when the index hits this number (loop condition is i<max) */

_start:

    mov     $start,%r15         /* loop index */

    mov     $'0',%r12                 /* copy ascii 0 to r12 to use as a comparison for quotient */

loop:

    mov     $0,%rdx             /* Clear the remainder */

    mov     %r15,%rax           /* Copy r15 into rax to be used as divident */

    mov     $10,%r10            /* store 10 into r10 to be used as divisor */

    div     %r10                /* divide value of r15 by 10 */    

    mov     %rax,%r13           /* copy quotient to r13 */

    add     $'0',%r13           /* convert r13 to ascii */

    cmp     %r12,%r13           /* check if quotient is 0 */

    je      contd               /* if equal, skip to contd label */

    mov     %r13b,msg+6         /* if quotient is not 0, set it as the first number */

contd:

    mov     %rdx,%r14           /* copy remainder to r14 */

    add     $'0',%r14           /* convert r14 to ascii */

    mov     %r14b,msg+7         /* move a single byte into memory position of msg+7 */

    mov     $len,%rdx           /* message length */

    mov     $msg,%rsi           /* message location */

    mov     $1,%rdi               /* file descriptor stdout */

    mov     $1,%rax               /* syscall sys_write */

    syscall

    inc     %r15                /* increment index */

    cmp     $max,%r15           /* see if we’re done */

    jne     loop                /* loop if we’re not */

    mov     $0,%rdi             /* exit status */

    mov     $60,%rax            /* syscall sys_exit */

    syscall

.data

msg:    .ascii      "Loop:   \n"

len = . - msg

The output for both versions is the same, and the result is:

Loop:  0

Loop:  1

Loop:  2

Loop:  3

Loop:  4

Loop:  5

Loop:  6

Loop:  7

Loop:  8

Loop:  9

Loop: 10

Loop: 11

Loop: 12

Loop: 13

Loop: 14

Loop: 15

Loop: 16

Loop: 17

Loop: 18

Loop: 19

Loop: 20

Loop: 21

Loop: 22

Loop: 23

Loop: 24

Loop: 25

Loop: 26

Loop: 27

Loop: 28

Loop: 29

Loop: 30

Assembler coding is more difficult compared with other languages; in particular, when we get an error while running the code, debugging is complicated. With assembler programming we also need to deal with registers by their specific names.

 


by msmohiti at October 09, 2017 08:16 PM


Ray Gervais

The Open Source Audio Project (Idea!)

Hello there! If you’re not new to the blog, or I haven’t changed any of the main headings for the website at the time of this article, you’d be aware just how big of an advocate I am of FOSS technologies on our everyday mediums. Android devices running AOSP-centric ROMs, Linux workstations running Fedora 26, and my non-FOSS hardware running as many OSS technologies as possible such as Inkscape, Visual Studio Code, Kdenlive, Firefox, etc. Ironically, the one area which I hadn’t played with for a few years now was audio production in an open source environment.

Why is this ironic? Because audio production is what first introduced me to Linux & FOSS technologies. In my cheap attempt to find a well developed & refined DAW which could be legally accessible by a high schooler, I discovered Audacity, Ardour, LMMS, and Muse; all of which pointed the way towards Ubuntu, Open SUSE, Fedora, and Linux itself. My world changed quickly from these encounters, but I always turned back to Cubase, FL Studio, Studio One when I wanted to record or mix a track for a friend.

Recently, a fellow musician and close friend successfully encouraged me to get back into playing, recording, and mixing. It had been at least two years since I took such a hobby so seriously, but with his encouragement my YouTube playlists quickly became packed with refresher material, mixing tips, and sounds from the past. As a consequence, we recorded, in the span of a single day, a cover of Foster the People’s ‘Pumped Up Kicks’; vocals done by the impressive Shirley Xia. The track can be found here for those curious: Pumped Up Kicks – FtP Cover by Ray Gervais

It was recorded & mixed in a Reaper session, which turned out much better than expected with only the use of stock ReaPlugins. This begged a question, one which would hit like a kick drum over and over all week: could this level of production quality be possible using only FOSS? Would Ardour be able to keep up with my OCD for multi-tracking even the simplest of parts?

The 1st Idea

The first idea is to export the Reaper stems as .WAV files into Ardour, and get a general mixing template / concept together based on previous trials / settings. This will also help me to confirm the quality of the native plugins, and whether I should be worried about VST support in the case the native plugins don’t meet the sound which Reaper did. I’m both incredibly nervous and excited to see the end result, but fear that most of the time will be spent configuring & fixing JACK, ALSA, or performance issues on the Fedora machines.

If all goes well, I’ll probably upload the track as a rerelease mix with the settings used & various thoughts.

The 2nd Idea

The second idea is to record a track natively (via MBox 2 OSS drivers) into Ardour, and to compose, mix, and master using Ardour & open source software exclusively. I feel obligated to point out that if I were to use VSTs for any reason, they must be freeware at a bare minimum. No paid, freemium, or proprietary formats (looking at you, Kontakt).

I wonder if genres which don’t demand pristine sounds, such as lo-fi, ambient, post-rock, or even IDM, would be easier to manage compared to that of an indie sound, or an angry metal sound. My first test would probably dwell in the electronic genre while I set up the recording interface to work best with the configuration (reducing latency where possible, dealing with buffer overflows).

DAW Applications & Considerations

In this small conclusion, I simply want to list the other possible applications / technologies to consider in the case that the primary ones mentioned above do not work as intended.

DAW (Digital Audio Workstation) Alternatives

  • Audacity: One of the most popular audio editors in the world, known for its simplistic interface, easy-to-use plugins, and its usefulness as an audio recording application for mediums such as podcasts, voice overs, etc. I’ve only used Audacity twice, both times just to experiment or to record a quick idea on the fly. See, Audacity isn’t meant to be the direct answer to common DAW paradigms such as track comping. It’s not meant to be used to fix a bad rhythm either. Source code: https://github.com/audacity
  • LMMS: An open source alternative to FL Studio. Useful for sequencing, and has built-in VST3 support as of recent versions. I had used LMMS in the past for quick ideas and testing chords out through various loops, and dismissed using it further due to stability issues at the time (circa 2013). I’m curious what state the project is in now. Source code: https://github.com/LMMS/lmms
  • Qtractor: A multitrack audio and MIDI sequencing tool, developed on the QT framework with C++. This DAW I am the least experienced with, but some seem to endorse it for electronic music production on Linux.  Source code: https://github.com/rncbc/qtractor

I’m excited for this experiment, and hope to follow up in a much more frequent article release period. My only concern is the end product, and if I’ll have a listenable song using only OSS that is not subpar in quality. Documenting the process will also help myself to understand the strengths and drawbacks to this challenge. Even if just doing a modern remix of the original track would be a unique experience, since I have all the recorded stems in multitrack format already. Can’t wait to start!

by RayGervais at October 09, 2017 05:44 PM


Chun Sing Lam

SPO600 Lab 4 – Build and Test Open Source Software

Build and test an open source software package

For the first part of this lab, I need to build and test a piece of open source software. I will be using Fedora to do so. I need to choose a software package from an open source project, and I have decided to choose the grep package from the GNU Project. Grep is used to search for lines in a file that match a specified pattern and then output these lines. Here are the steps to compile and test grep:

  1. From a local directory (eg. ~/spo600/lab4), download the grep package by issuing the command “wget https://ftp.gnu.org/gnu/grep/grep-3.1.tar.xz” in the command line. We need to find out the URL for the grep package before using the wget command.
  2. Issue the command “tar xvf grep-3.1.tar.xz” to extract and uncompress the package to the folder “grep-3.1”.
  3. The next step is to figure out how to build the software. After the package has been extracted, we see a lot of files under the main folder “grep-3.1”. There is a “README” file, so we issue the command “cat README” to view the contents of this file. This file tells us general information about grep as well as important files and their uses. It tells us that the “INSTALL” file contains information about compilation and installation, so we issue the command “cat INSTALL” to view the contents of this file. The “INSTALL” file provides detailed instructions on how to compile and install the software. For this lab, we will not be installing the software, so we can ignore sections about software installation. This file tells us that the command “make check” is used to run self-tests that are included with the package. This file also tells us that it takes two steps to compile the package. The first step is to use the “cd” command to move to the directory that contains the “configure” file and then issue the command “./configure”. This will run the shell script to configure the package and create the “Makefile” file based on my system information. The second step is to issue the command “make” to compile the package using the “Makefile” file. Now that we have read the “INSTALL” file, we can issue the command “./configure” from the directory “grep-3.1” (eg. ~/spo600/lab4/grep-3.1) to configure the package and create the “Makefile” file for my system:

grep1

  4. After “configure” runs successfully, we use the command “make” to compile grep:

grep2

  5. After the compilation has completed, we are ready to test our grep build. We issue the command “make check” to run the test scripts that are provided with the package and we see that most tests are successful:

grep3

Next, we will try to use the grep utility that we have built. We issue the command “cd src” to move to the “src” directory, which is where the grep executable from our build is located. We then issue the command “./grep the dfasearch.c” to run our version of grep. This grep command will search for all lines in the file “dfasearch.c” that contain the string “the” and display the results on the screen. It seems like we get the correct results, so our grep build works:

grep4

I have grep installed on my system, so let’s use the installed version of grep and see what results we get. We issue the command “grep the dfasearch.c” to run the installed version of grep. We get the same results as above, which proves that the results from our version of grep are correct and we have built grep successfully. The only difference in terms of output is that this time the search string “the” is highlighted in red in the output:

grep5

Here, we see that the compilation process is fairly straightforward. We download and extract the software package that we want and then run a couple of commands to compile the software. We have not been asked to install any dependencies during the compilation process. We just need to read the “INSTALL” file to obtain instructions on how to compile and install the software.

Build and test glibc

For the second part of this lab, I need to build and test GNU C Library (glibc). Here are the steps to compile and test glibc:

  1. We issue the command “mkdir ~/spo600/lab4/src” to create the directory “src” to store the source code.
  2. Issue the command “cd ~/spo600/lab4/src” to move to the “src” directory.
  3. Issue the command “git clone git://sourceware.org/git/glibc.git” to download the glibc source code to the “src” directory.
  4. As I already mentioned, the “INSTALL” file provides instructions for compilation and installation. We need to issue the command “cd glibc” to move to the “glibc” directory and then issue the command “cat INSTALL” to view the contents of this file. It tells us that the GNU C Library cannot be compiled in the source directory. We need to build it in another directory, so we will create a build directory. Same as grep, the file tells us to run the “configure” script, use “make” to compile glibc and then use “make check” to test glibc. The file also tells us that we should install and update the following tools before building the GNU C Library: make, GCC, binutils, texinfo, gawk, Perl, and sed. In addition, we need to install and update the following tools before we can run test scripts after building the GNU C Library: Python, PExpect and GDB.
  5. Use the “sudo yum install” and “sudo yum update” commands to install and update all of the tools listed above.
  6. In the end, I need to run a program to verify that I am testing the newly built glibc. Therefore, I will introduce a small change to the source code for a function in order to differentiate between the new glibc and system glibc. I decide to change the code for the rand() function so that the function returns the number 5 instead of a random number. The path to the rand() function source file is “~/spo600/lab4/src/glibc/stdlib/rand.c” and here is the contents of the file after the change:

glibc3

  7. Issue the command “mkdir -p ~/spo600/lab4/build/glibc” to create the build directory.
  8. Issue the command “cd ~/spo600/lab4/build/glibc” to move to the build directory.
  9. Issue the command “~/spo600/lab4/src/glibc/configure --prefix=/home/cslam4/spo600/lab4/build/glibc” to run the “configure” script to configure the package and create the “Makefile” file for my system. The “--prefix” option tells “configure” where to install the GNU C Library:

glibc1

  10. After “configure” runs successfully, we use the command “make” to compile glibc:

glibc2

  11. After compilation is complete, I create the program “randnum.c” to test the newly built glibc:
#include <stdio.h>
#include <stdlib.h>

int main () {
        // Print 10 random numbers
        int i;
        for( i = 0 ; i < 10 ; i++ ) {
                printf("Number %d: %d\n", i + 1, rand());
        }
        return 0;
}

This program will display 10 random numbers. I can use the “testrun.sh” script in the build directory to run my program using the newly built glibc. To do that, I issue the command “./testrun.sh ~/spo600/lab4/randnum” from the build directory. We see that we get the number 5 instead of a random number, which means that I am using the newly built glibc:

glibc4

When I run my program using the system glibc using the command “./randnum”, I get 10 random numbers:

glibc5

It was great to see the two different results!

Compared to grep, compiling and testing glibc is more complicated and involves more steps. For glibc, we need a directory to store the source code and a different directory to compile glibc. We also need to install and update some tools in order to compile glibc and run some test scripts. For testing, we need to change the source code for a function to alter the function’s behaviour in order to test our version of glibc. It took me a bit of time to find out where the source file for the function is located.

Override and multiarch

There may be more than one implementation of each function in a shared library, due to the presence of different architectures, with each version optimized for a specific architecture. We can override a version of a function with another, such as our own version. The “ldd” command is used to show the shared library dependencies for a program. When we execute a program, the dynamic linker is invoked. The dynamic linker searches for these shared libraries based on specific configuration files and environment variables, loads these libraries into memory, and links everything together before executing the program.

The dynamic linker allows us to load other libraries and override functions in shared libraries using the LD_PRELOAD environment variable. To override a function, we need to build a shared library with our version of the function. We use the “gcc” and “ld” commands to build the source file into a shared library. Next, we set the LD_PRELOAD environment variable to the path of our shared library so that it is loaded first when the function is called. It is important to note that this method may not work for functions such as printf and scanf, because the compiler can replace them at build time with internal variants (a simple printf call, for example, may be compiled into a call to puts). There may be other methods to override functions, but that will require more research.
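As a concrete sketch of the override process described above (the file names are my own, not from the lab): put a replacement rand() in override.c, build it as a shared object, and point LD_PRELOAD at it.

```c
/* override.c: our version of rand(), to be preloaded over glibc's.
 *
 * One common way to build and use it (shell):
 *   gcc -shared -fPIC -o liboverride.so override.c
 *   LD_PRELOAD=$PWD/liboverride.so ./randnum
 *
 * The dynamic linker resolves rand() to this definition before
 * reaching glibc, so randnum prints 5 on every call instead of
 * a random number.
 */
int rand(void) {
    return 5;
}
```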

Currently, many software packages are built for a specific architecture. We can only install one version of a package on one system. Multiarch allows more than one platform-specific version of a package to be installed on the same system. Cross-architecture dependencies can be installed and cross compilation is possible. We can install a package built for another architecture on our system. With multiarch, we can install a library of different architectures on a single system.

In the glibc source directory, the “sysdeps” directory contains a number of architecture-specific directories, each of which contains functions written in C or assembler for the specific architecture. When we run the “configure” script before compiling glibc, it searches these architecture-specific directories for architecture-specific functions to use based on the operating system, the manufacturer’s name, and the CPU type.

In the architecture-specific directory, we also see the “multiarch” directory, which contains more implementations of functions written in C or assembler. The “multiarch” directory also contains some IFUNC files. Indirect function support (IFUNC) is a feature that allows an implementation of a function to be selected at runtime using a resolver function. When we call a function, the dynamic loader runs a resolver function to select the best implementation of that function to be used by the application.


by chunsinglam at October 09, 2017 03:50 AM


Henrique Coelho

Auto-vectorization with GCC on Aarch64, and also my SPO600 Lab 5

In my last post I talked about some optimizations that the C compiler can do to our code. This time, I will talk a little about vectorization and how the compiler can also do it automatically.

First, I want to talk about what vectorization is. Suppose we have the following code:

short a[] = { 1, 1, 2, 2 };
short b[] = { 3, 4, 4, 3 };
short c[4];

Now, say we want to sum the elements of these arrays (1 + 3, 1 + 4, 2 + 4, and 2 + 3) into the array c. How could we do this?

A simple approach is to do the following:

for (size_t i = 0; i < 4; i++) c[i] = a[i] + b[i];

This is great, but we are not being as efficient as we could be. Let's think about memory for a bit: a short is (typically) 2 bytes long (16 bits), and words for modern processors are 64 bits long. This means that for every iteration, we are loading 16-bit numbers into 64-bit registers:

I am going to use decimal notation for some examples, because it is much easier to read

// Imagine we are loading the numbers "4" and "1" in registers
// in order to add them. This is what the compiler would do:

      16 bit            16 bit           16 bit           16 bit
|----------------|----------------||----------------|----------------|
        4                 0                0                 0 // <- a[]
        1                 0                0                 0 // <- b[]
        5                 0                0                 0 // <- c[] (result)

This is a lot of wasted space. If we somehow loaded more numbers into that wasted space, we would be doing 4 operations at a time, instead of only 1:

      16 bit            16 bit           16 bit           16 bit
|----------------|----------------||----------------|----------------|
        1                 1                2                 2 // <- a[]
        3                 4                4                 3 // <- b[]
        4                 5                6                 5 // <- c[] (result)

This would be great! We could theoretically align the numbers in memory (or not, if they are already aligned), load them into the register, and do these operations in parallel! But there is a catch: if the numbers are large enough, they will overflow to the left, ruining the whole operation:

//                                       0 + 0          65535 *+ 1
      16 bit            16 bit           16 bit           16 bit
|----------------|----------------||----------------|----------------|
 ???????????????? ????????????????  0000000000000000 1111111111111111
 ???????????????? ????????????????  0000000000000000 0000000000000001

 ???????????????? ????????????????  0000000000000000 0000000000000000 // <- what we want
 ???????????????? ????????????????  0000000000000001 0000000000000000 // <- what we get
//                                                 ^- This should not be here!

// * assuming unsigned

Checking for this pitfall manually would be too tedious and too complicated; it would likely make the process slow again and just would not be worth it. However, modern CPUs have special registers for this kind of operation, called vector registers: we can tell them how long each piece of data is, and they will avoid this overflow. Since they are optimized for a high volume of data, they can also process larger words, like 128 bits instead of 64. The process of using these registers to perform operations in parallel is called vectorization; it is great for applying a single instruction to multiple data (SIMD).

Luckily, compilers like GCC have built-in vectorization; this means that if we are using optimization, the compiler will try to vectorize operations when it finds it worthwhile. To make this possible, however, we need to adopt some good practices that keep the vectorization process simple and free of extra overhead. The two main practices I would like to talk about are memory alignment and data dependencies.

1. Memory alignment

Memory is just a long strip of data. Say this is how our memory looks, with both our arrays in it:

 Word 1                                  Word 2
|---------------------------------------|---------------------------------------|
0x0  0x1  0x2  0x3  0x4  0x5  0x6  0x7  0x8  0x9  0xA  0xB  0xC  0xD  0xE  0xF
|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
  ?    ?    1    1    2    2    ?    ?    ?    3    4    4    3    ?    ?    ?

Cool. Now let's try loading both words in vector registers to add them together:

|----|----|----|----|----|----|----|----|
  ?    ?    1    1    2    2    ?    ?
  ?    3    4    4    3    ?    ?    ?

That is a problem: they are misaligned, so we can't do the vectorization here properly. Well, we can, but it would take some time to properly align the memory first. This is why I said memory alignment is a problem.

However, C is a really cool language, and it has some tools we can use to align the memory:

short arr1[] __attribute__((__aligned__(16))) = { 1, 1, 2, 2 };
short arr2[] __attribute__((__aligned__(16))) = { 3, 4, 4, 3 };

It is not very pretty, but it will align our arrays: we are telling the compiler to put the start of the array at an address that is a multiple of 16. Why 16? Because 16 bytes is 128 bits, which will align our memory right at the beginning of every 128-bit word for the vector registers.

Notice that even if we are using those attributes, our data can still be misaligned. For example, if this is our operation:

for (size_t i = 0; i < 4; i++) c[i] = a[i] + b[i + 1];

We don't want a[i] to be aligned with b[i], we want a[i] to be aligned with b[i + 1]! We must keep this in mind when aligning our values.

C structs are a good way to keep variables together. For example:

struct obj {
  int a;
  char b;
  short c;
};

These three members would be laid out sequentially in memory. Assuming an int of 32 bits, a char of 8 bits, and a short of 16 bits, we would have 56 bits in total, and the structs are padded in memory so they occupy 64 bits each. However, if we were to vectorize the sum of the member "a" across several of these structs, we would run into another problem: the "a" members of consecutive structs would be too far apart in memory, and we would not be able to fit more than one in the same vector register easily.

For this reason, it is recommended to use structures of arrays instead of arrays of structures if you are planning to vectorize their values: keep the values you want to vectorize close to each other.

2. Data dependencies

Say that we are doing this operation in the loop, and it will be vectorized:

a[i + 1] = a[i] + b[i];

This is a big problem: in a normal loop, we would be modifying the next value in the array, which is totally fine. However, since we will be working with a[i] and a[i + 1] at the same time, this would not be possible. Data dependencies like this can make the vectorization process too difficult or impossible.

3. Other practices

Other practices that will make it easier for the compiler to vectorize our code (which I won't spend too much time on) are:

  • Make the number of iterations easily countable
  • Have the loop as single entry and single exit (no breaks, continues, etc.)
  • Avoid branches, switches, and function calls
  • Use the loop index ("i") for accessing the array

Vectorizing

I wrote the following application in C:

#include <stdio.h>
#include <stdlib.h>

int main()
{
    srand(1);

    int a1[1000] __attribute__((__aligned__(16)));
    int a2[1000] __attribute__((__aligned__(16)));
    int a3[1000] __attribute__((__aligned__(16)));

    int sum = 0;

    for (int i = 0; i < 1000; i++) {
        a1[i] = (rand() % 2000) - 1000;
        a2[i] = (rand() % 2000) - 1000;
    }


    for (int i = 0; i < 1000; i++) {
        a3[i] = a1[i] + a2[i];
    }

    for (int i = 0; i < 1000; i++) {
        sum += a3[i];
    }

    printf("Sum: %d\n", sum);

    return 0;
}

The idea is simple: 3 arrays, load two arrays with random numbers, add the numbers into the third array, add the numbers from the third array into a variable "sum", print the sum, exit.

I compiled the code above in an Aarch64 machine, with the flag -O3 (optimized), and here is the result in assembly:

I cleaned up the code a little bit, removing some useless comments and leaving part of the debugging source. I also added some observations about what is important.

/* Just some initialization stuff */
mov   x16, #0x2f20
sub   sp, sp, x16

/* Seeding the random generator */
mov   w0,  #0x1
stp   x29, x30, [sp]
mov   x29, sp
stp   x19, x20, [sp,#16]

/* for (int i = 0; i < 1000; i++) {    */

/*  a1[i] = (rand() % 2000) - 1000; */
mov   w20, #0x4dd3
stp   x21, x22, [sp,#32]
/*  a1[i] = (rand() % 2000) - 1000; */
movk  w20, #0x1062, lsl #16
str   x23, [sp,#48]
add   x22, x29, #0x40
add   x21, x29, #0xfe0
/*  a1[i] = (rand() % 2000) - 1000; */
mov   w19, #0x7d0
mov   x23, #0x0
bl    400510 <srand@plt>
/*  a1[i] = (rand() % 2000) - 1000; */
bl    4004e0 <rand@plt>
smull x1, w0, w20
asr   x1, x1, #39
sub   w1, w1, w0, asr #31
msub  w0, w1, w19, w0
sub   w0, w0, #0x3e8
str   w0, [x22,x23]
/*  a2[i] = (rand() % 2000) - 1000; */
bl    4004e0 <rand@plt>
smull x1, w0, w20
asr   x1, x1, #39
sub   w1, w1, w0, asr #31
msub  w0, w1, w19, w0
sub   w0, w0, #0x3e8
str   w0, [x21,x23]
add   x23, x23, #0x4
/* for (int i = 0; i < 1000; i++) { */
cmp   x23, #0xfa0
b.ne  40056c <main+0x3c>
mov   x2, #0x1f80
add   x1, x29, x2
mov   x0, #0x0
/* } */


/* for (int i = 0; i < 1000; i++) { */
/*  a3[i] = a1[i] + a2[i]; */
ldr   q0, [x22,x0]
ldr   q1, [x21,x0]
add   v0.4s, v0.4s, v1.4s /* <--- Notice the name of these strange registers! */
str   q0, [x1,x0]
add   x0, x0, #0x10
cmp   x0, #0xfa0
b.ne  4005bc <main+0x8c>
movi  v0.4s, #0x0 /* <--- WOW!!!! */
mov   x0, x1
mov   x1, #0x2f20
add   x1, x29, x1
/* } */

/* for (int i = 0; i < 1000; i++) { */
/*  sum += a3[i]; */
ldr   q1, [x0],#16
add   v0.4s, v0.4s, v1.4s /* <--- O NO!!!!!!! */
cmp   x0, x1
b.ne  4005e8 <main+0xb8>
/* } */

/* printf("Sum: %d\n", sum); */
addv  s0, v0.4s
adrp  x0, 400000 <_init-0x498>
add   x0, x0, #0x7f0
mov   w1, v0.s[0]  /* <--- SEND HELP!!!!!!!!!!!!! */
bl    400520 <printf@plt>

/* return 0; */
/* } */
ldr   x23, [sp,#48]
ldp   x19, x20, [sp,#16]
mov   w0, #0x0
ldp   x21, x22, [sp,#32]
mov   x16, #0x2f20
ldp   x29, x30, [sp]
add   sp, sp, x16
ret
.inst 0x00000000 ; undefined

If the compiler were not optimizing this code, we would see 3 loops in total, but that does not happen here: the loops get merged together, and they also get unrolled. You can see more details about this in my previous post.

The most important things, however, are those weird registers, like v0.4s. You will never guess what the v in the name of the register stands for in this blog post about vector registers. They are vector registers! The compiler optimized the code so it would make use of vector registers to perform the operations!

The name v0.4s refers to the first (index 0) vector register, dividing it into 4 lanes of 32 bits (that's the meaning of the "s") each. This means that we can fit 4 pieces of data with 32 bits each in each vector. You can find more information about this naming convention here.

Now, let's take a closer look at this part:

/* Load the Quadword 0 (128 bit register) with the content of x22 + x0 */
ldr   q0, [x22,x0]

/* Load the Quadword 1 (128 bit register) with the content of x21 + x0 */
ldr   q1, [x21,x0]

/* Add vector 0 + vector 1 (both), storing the result in vector 0 */
add   v0.4s, v0.4s, v1.4s

/* Store the content of Quadword 0 (our result) into x1 + x0 */
str   q0, [x1,x0]

And there it is. Our vectorized code.

Now, I also compiled the same code in -O0, which should NOT vectorize the code - and indeed, it was not vectorized. Here it is, notice that there are no vector registers being used here:

mov   x16, #0x2f00
sub   sp, sp, x16
stp   x29, x30, [sp]
mov   x29, sp
/* srand(1); */
mov   w0, #0x1
bl    400510 <srand@plt>

/*  int a1[1000] __attribute__((__aligned__(16))); */
/*  int a2[1000] __attribute__((__aligned__(16))); */
/*  int a3[1000] __attribute__((__aligned__(16))); */

/* int sum = 0; */
str   wzr, [x29,#12028]

/*  for (int i = 0; i < 1000; i++) { */
str   wzr, [x29,#12024]
b     4006f0 <main+0xbc>
/*   a1[i] = (rand() % 2000) - 1000; */
bl    4004e0 <rand@plt>
mov   w1, w0
mov   w0, #0x4dd3
movk  w0, #0x1062, lsl #16
smull x0, w1, w0
lsr   x0, x0, #32
asr   w2, w0, #7
asr   w0, w1, #31
sub   w0, w2, w0
mov   w2, #0x7d0
mul   w0, w0, w2
sub   w0, w1, w0
sub   w2, w0, #0x3e8
ldrsw x0, [x29,#12024]
lsl   x0, x0, #2
add   x1, x29, #0x1, lsl #12
add   x1, x1, #0xf50
str   w2, [x1,x0]
/*   a2[i] = (rand() % 2000) - 1000; */
bl    4004e0 <rand@plt>
mov   w1, w0
mov   w0, #0x4dd3
movk  w0, #0x1062, lsl #16
smull x0, w1, w0
lsr   x0, x0, #32
asr   w2, w0, #7
asr   w0, w1, #31
sub   w0, w2, w0
mov   w2, #0x7d0
mul   w0, w0, w2
sub   w0, w1, w0
sub   w2, w0, #0x3e8
ldrsw x0, [x29,#12024]
lsl   x0, x0, #2
add   x1, x29, #0xfb0
str   w2, [x1,x0]
/*  for (int i = 0; i < 1000; i++) { */
ldr   w0, [x29,#12024]
add   w0, w0, #0x1
str   w0, [x29,#12024]
ldr   w0, [x29,#12024]
cmp   w0, #0x3e7
b.le  400658 <main+0x24>
/*  } */


/*  for (int i = 0; i < 1000; i++) { */
str   wzr, [x29,#12020]
b     400748 <main+0x114>
/*   a3[i] = a1[i] + a2[i]; */
ldrsw x0, [x29,#12020]
lsl   x0, x0, #2
add   x1, x29, #0x1, lsl #12
add   x1, x1, #0xf50
ldr   w1, [x1,x0]
ldrsw x0, [x29,#12020]
lsl   x0, x0, #2
add   x2, x29, #0xfb0
ldr   w0, [x2,x0]
add   w2, w1, w0
ldrsw x0, [x29,#12020]
lsl   x0, x0, #2
add   x1, x29, #0x10
str   w2, [x1,x0]
/* for (int i = 0; i < 1000; i++) { */
ldr   w0, [x29,#12020]
add   w0, w0, #0x1
str   w0, [x29,#12020]
ldr   w0, [x29,#12020]
cmp   w0, #0x3e7
b.le  400704 <main+0xd0>
/*  } */

/*  for (int i = 0; i < 1000; i++) { */
str   wzr, [x29,#12016]
b     400784 <main+0x150>
/*   sum += a3[i]; */
ldrsw x0, [x29,#12016]
lsl   x0, x0, #2
add   x1, x29, #0x10
ldr   w0, [x1,x0]
ldr   w1, [x29,#12028]
add   w0, w1, w0
str   w0, [x29,#12028]
/*  for (int i = 0; i < 1000; i++) { */
ldr   w0, [x29,#12016]
add   w0, w0, #0x1
str   w0, [x29,#12016]
ldr   w0, [x29,#12016]
cmp   w0, #0x3e7
b.le  40075c <main+0x128>
/*  } */

/*  printf("Sum: %d\n", sum); */
adrp  x0, 400000 <_init-0x498>
add   x0, x0, #0x870
ldr   w1, [x29,#12028]
bl    400520 <printf@plt>

/*  return 0; */
mov   w0, #0x0
/* } */
ldp   x29, x30, [sp]
mov   x16, #0x2f00
add   sp, sp, x16
ret
.inst 0x00000000 ; undefined

by Henrique at October 09, 2017 02:25 AM


Saeed Mohiti

Compiled C Lab

In lab 2 we were asked to compile a C program using the GCC compiler with these three options:

-g               # enable debugging information

-O0              # do not optimize (that’s a capital letter and then the digit zero)

-fno-builtin     # do not use builtin function optimizations

 

After compiling with the above options:

gcc -g -O0 -fno-builtin hello.c

Then I used the command “objdump -f a.out” and got this result:

hello:     file format elf64-x86-64

architecture: i386:x86-64, flags 0x00000112:

EXEC_P, HAS_SYMS, D_PAGED

start address 0x0000000000400400

The first line, as it seems, shows the format of the file, “elf64-x86-64”.

The second line shows the architecture, “i386:x86-64”, and the flags refer to the definitions below:

/* BFD is directly executable.  */

#define EXEC_P          

/* BFD has symbols.  */

#define HAS_SYMS         

/* BFD is dynamically paged (this is like an a.out ZMAGIC file) (the     linker sets this by default, but clears it for -r or -n or -N).  */

#define D_PAGED  

The “-s” option displays per-section summary information, and the “-d” option disassembles sections containing code. In this task we need to focus on the <main> section:

00000000004004d7 <main>:
  4004d7:       55                                push   %rbp
  4004d8:       48 89 e5                      mov    %rsp,%rbp
  4004db:       bf 90 05 40 00            mov    $0x400590,%edi
  4004e0:       b8 00 00 00 00            mov    $0x0,%eax
  4004e5:       e8 06 ff ff ff                 callq  4003f0 <printf@plt>
  4004ea:       b8 00 00 00 00            mov    $0x0,%eax
  4004ef:       5d                                  pop    %rbp
  4004f0:       c3                                  retq  
  4004f1:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)
  4004f8:       00 00 00
  4004fb:       0f 1f 44 00 00               nopl   0x0(%rax,%rax,1)

The 2nd line pushes the base pointer onto the stack, and the 3rd line copies the stack pointer into the base pointer. The 5th line moves 0 into eax, which is the return register, and in the next line, “callq”, the program calls printf to print the statement.

The next task asks us to compile the program with the “-static” option and check the size. The size before adding the option was:

10592 Oct  8 20:09 a.out   

And after running gcc -g -O0 -fno-builtin -static hello.c, the size changed to:

931696 Oct  8 20:34 a.out

On systems that support dynamic linking, the -static option overrides -pie and prevents linking with the shared libraries, which is why the file grows so much and gains more section headers.

The next task asks us to remove the “-fno-builtin” option, and the result is:

00000000004004d7 <main>:

  4004d7:       55                                 push   %rbp

  4004d8:       48 89 e5                       mov    %rsp,%rbp

  4004db:       bf 80 05 40 00            mov    $0x400580,%edi

  4004e0:       e8 0b ff ff ff                callq  4003f0 <puts@plt>

  4004e5:       b8 00 00 00 00            mov    $0x0,%eax

  4004ea:       5d                                 pop    %rbp

  4004eb:       c3                                 retq  

  4004ec:       0f 1f 40 00                   nopl   0x0(%rax)

With “-fno-builtin” removed, the compiler replaced the printf() call with the more efficient puts(). In the next task we need to remove the “-g” option, which enables debugging information. In this case the size of the file is reduced significantly:

8168 Oct  8 20:51 a.out

After that, we add extra arguments to the printf() call to see the changes.

The output is:

00000000004004d7 <main>:

  4004d7:       55                              push   %rbp

  4004d8:       48 89 e5                   mov    %rsp,%rbp

  4004db:       48 83 ec 08             sub    $0x8,%rsp

  4004df:       6a 0a                        pushq  $0xa

  4004e1:       6a 09                        pushq  $0x9

  4004e3:       6a 08                        pushq  $0x8

  4004e5:       6a 07                       pushq  $0x7

  4004e7:       6a 06                       pushq  $0x6

  4004e9:       41 b9 05 00 00 00       mov    $0x5,%r9d

  4004ef:       41 b8 04 00 00 00       mov    $0x4,%r8d

  4004f5:       b9 03 00 00 00          mov    $0x3,%ecx

  4004fa:       ba 02 00 00 00          mov    $0x2,%edx

  4004ff:       be 01 00 00 00          mov    $0x1,%esi

  400504:       bf b0 05 40 00          mov    $0x4005b0,%edi

  400509:       b8 00 00 00 00          mov    $0x0,%eax

  40050e:       e8 dd fe ff ff          callq  4003f0 <printf@plt>

  400513:       48 83 c4 30             add    $0x30,%rsp

  400517:       b8 00 00 00 00          mov    $0x0,%eax

  40051c:       c9                      leaveq

  40051d:       c3                      retq  

  40051e:       66 90                   xchg   %ax,%ax

As we can see, the extra arguments are pushed onto the stack or moved into registers one by one, along with the location of the format string.

For the final task, we must compile the code with the “-O3” option, which stands for optimization level 3 and compiles the code at a much higher optimization level than “-O0” (optimization level zero):

After compiling the original file, the result is quite similar, but with fewer lines:

0000000000400400 <main>:

  400400:       48 83 ec 08             sub    $0x8,%rsp

  400404:       bf 90 05 40 00          mov    $0x400590,%edi

  400409:       e8 e2 ff ff ff          callq  4003f0 <puts@plt>

  40040e:       31 c0                   xor    %eax,%eax

  400410:       48 83 c4 08             add    $0x8,%rsp

  400414:       c3                      retq  

  400415:       66 2e 0f 1f 84 00 00    nopw   %cs:0x0(%rax,%rax,1)

  40041c:       00 00 00

  40041f:       90                      nop

Changing the flags and options while compiling can make large changes to our object code.


by msmohiti at October 09, 2017 01:43 AM

October 08, 2017


Mat Babol

Finding my first Open Source bugs

This week on my Open Source journey, I started hunting for my first bugs to work on. I was hoping to find something simple, so I could understand how the whole process works. Once I know how to contribute, I'll start looking into more difficult bugs.

When looking for bugs, I was mostly looking for JavaScript or CSS bugs. There is a large variety of bugs available, from Python on the Mozilla Network site, to Java and C++ on Firefox Mobile Android. There were a few projects that I narrowed my search down to: Thimble, Firefox DevTools, Rust, and rr. Each has plenty of bugs to start with.



It's hard to understand what any of these bugs mean. Even after reading the descriptions, I couldn't get a clear understanding of what I needed to do. A lot of the bugs were also already assigned to a contributor. So I kept looking for something that I felt comfortable with.

On Firefox DevTools, I finally came across something I'd be interested in. The bug [1403883] is not very difficult and doesn't break anything. Deep inside the Firefox dev tools, the boundaries of a button extend to the very end of the row, instead of only over the image.


That seems like it could be an easy CSS fix, so I looked into it. It's a good first bug: the explanation was clear and concise, and the author of the bug even included an easy-to-follow video to show what is going on. Nobody was assigned to this bug either. Perfect! I asked for this bug right away.


A few days later, the bug was assigned to me. I was excited, finally my first bug to work on. I was even assigned a mentor, so if I have any questions, I have somebody to ask.


Now for my Open Source class, I had to find at least two different bugs, so the search still continued. I found one potential bug that was not assigned to anybody. The bug [3639] states that when the preference pane is open, the outer element should not be scrollable. Basically, there are two scrollable elements, and there should only be one. The bug does not sound too threatening, so I asked for it and waited for it to be assigned to me.




In the meantime, I kept looking at other bugs. I found an interesting Thimble bug [2140] from May 15 that nobody had claimed. A user is able to reduce the window down to a few pixels, making the window unreadable. There should be some sort of minimum width for the editor. That seemed like a bug that I would like to work on, so I asked for the bug and quickly got it assigned to myself.


During my hunt for bugs, I joined a few Slack groups, introduced myself, and asked for any bugs that I could work on.


The developers were very welcoming. I started off by asking about my first bug [1403883] and I asked about it in the wrong channel. I was welcomed, and guided properly to the correct channel.


One developer, jlast, sent me a few bugs that I could work on.


The community was very welcoming to me, a new contributor. I was welcomed with open arms. So that's two bugs that I have assigned to myself, and one potential bug that I am still waiting for. That gives me plenty of work to start with. I'm excited to start working on these bugs! Stay posted for an update in the next blog.

by Mat Babol (noreply@blogger.com) at October 08, 2017 04:08 PM


Sean Prashad

Success

October 8, 2017. I've made it 9 days so far in this vast new land known as "Open Source". If you are reading this, I am alive but not alone in this new adventure. With that being said, should I ever go missing, please see below for the password to my Mac (or maybe just a ploy to keep you reading further).

The Low-down

I have to admit that I haven't learned an enormous amount of new things (as yet), but I've walked away with 2 successfully merged PRs and some fancy new "Contributor" tags in the Rust and Crates.io repositories. I plan to document my journey in an upcoming blog post, but at a high level, my two assigned bugs focused on documentation cleanup and website branding.

What's New?

Well, if you can keep a secret, I can share a few new things that I've learned so far:

  • Not every project has auto-deployment (I'm not sure why but I want to find out)
  • Ask for help but don't be helpless! Put in the time and effort to do some research on your own before asking for guidance
  • Be more than just proactive. This has been my first opportunity to learn from a welcoming community whilst having my hand held by our Professor. Just like in industry, not everyone will have time to help, and responses will be slow! I've had to take the reins into my own hands for some of my bugs until I received guidance from project mentors
  • Removing code is not a bad thing! One of my bugs removed old documentation that may have confused other devs if they came across it! My Professor even called my specific scenario a "code deficit"

Comment, Subscribe and Like

Hey! Don't go just yet - I'm not the only one who's doing some bug-busting. In fact, most of my other colleagues have dived into Open Source projects like Thimble, Firefox Dev Tools and PDF.js. Why don't you take a look at what else is being worked on? Besides, it's Hacktoberfest and there are t-shirts to be won!!!

October 08, 2017 12:30 PM


Svitlana Galianova

The least stressful build ever

Deciding where to start my open-source developer path was maybe the hardest part of my contribution so far (I hope) 


The next step is to set up the environment and run the project locally.

Without much positive attitude or any expectations, I went to the link, which gave me some instructions on how to set up all the required dependencies and start the debugger.

I got Yarn, Node.js and the project itself:
git clone git@github.com:devtools-html/debugger.html.git  
So far so good!

The next step was to install all the dependencies with an even more magical command: yarn install. Nothing stressful, and with no pain I successfully started the project on port 8000 with yarn start.

And that was it! 

Tip for the future: have more courage; Murphy's law doesn't always apply.

by svitlana.galianova (noreply@blogger.com) at October 08, 2017 02:27 AM

October 07, 2017


Dan Epstein

Firefox DevTools – Bug 1402394 Remarks

The bug has successfully landed and is aimed for release in Firefox milestone 58. The process of fixing this bug can be found in my previous blog. Overall, I believe this is a "good-first-bug" to work on because it teaches you the fundamentals of using the MozillaBuild start-shell commands and the basics of Mercurial.

 

I have mentioned before that I encountered some errors along the way, such as having to fix the order of the name elements in the moz.build file and some file names that had not been renamed, but this was an easy fix. Besides those, there weren't any major issues.

The community in the Bugzilla@Mozilla forum is genuine and respectful. I was given the correct instructions and documentation on how to fix this bug by the reporter, Mr. Brosset, who assigned the bug to me.

I have also noticed that some of my classmates have been working on a similar bug, so I offered my help on the Slack channel. As of now, it seems they have also finished and submitted a patch file.

by Dan at October 07, 2017 08:34 PM

October 06, 2017


Fateh Sandhu

Lab 1 – Brackets

What is it called?

The project that I decided to look at is called Brackets. Brackets is an open source editor created by Adobe.

https://github.com/adobe/brackets

What is the project about? What problem does it solve?

It is a code editor used for programming work. It helps users work on their code by providing software that makes their lives easier.

How old is it? When did it start?

The project started in 2014. That would make it about 3 years old.

Which websites are associated with it (e.g., does it have a separate site beyond Github?)

It has its own dedicated website where you can learn about it and download it.

What language(s) is it written in?

It is mainly written Javascript and HTML.

How many open Issues does it have?

As of right now it has about 1910 open issues.

How many people have contributed to the code?

Right now there are about 65 pull requests and a few thousand people have forked the project.

Who is using the project? What are they doing with it?

This project is being used by thousands of software and web developers to make their content.


by firefoxmacblog at October 06, 2017 03:11 AM


Azusa Shimazaki

Bug fixing on thimble part1: set up an environment

As I posted before (assmith2017.blogspot.ca/2017/10/bugs-on-open-source.html),
I picked two issues from "thimble" for my first bug.
To start, I needed to set up the environment on my laptop.

I followed my professor's blog (http://blog.humphd.org/fixing-a-bug-in-mozilla-thimble/) and the official instructions on the Thimble page on GitHub (https://github.com/mozilla/thimble.mozilla.org), but, as usual, it was not easy for me.

My pc already had Node.js and Virtualbox, so I needed to add "Brackets" and "Vagrant" to set up thimble.

1. Install "Git"


   

To work in my Windows 10 console, I needed to install "Git" (https://git-scm.com/) as an extra.
"Vagrant" was also needed for Thimble. The installation went without problems.

 

2. Set up "Brackets (Bramble)"

After forking the "brackets" repository, I tried to clone it on my PC with the command:

$ git clone --recursive git@github.com:{your-username}/brackets.git
...but the console refused it.
It seemed to be an SSH problem.
After struggling, I cloned it with the GitHub Desktop application instead.

then hit commands
$ cd brackets
$ npm install
$ npm run build
and
$ npm start
...Oh no, I got errors. Why am I always getting errors!



I hadn't used the "--recursive" option when I cloned the repository. Was that the problem?
I was not sure, so I removed the first clone, and I found a way to clone it over HTTPS using the console.

$ git clone --recursive https://github.com/{your-username}/brackets.git
Ok, perfect. so no complaints this time?
$ cd brackets
$ npm install
$ npm run build
and
$ npm start
....Gosh,




It seemed the problem came from a different part.

I asked my professor for help. After checking an error log file, he found the error was caused by port 8000 being in use.
We tried to run "brackets" on port 5000 by editing a .js file, and it worked!
However, he recommended using port 8000, since it is the official setting.

So I needed to find out what was happening on port 8000 of my laptop.
I found a command to check:
$ netsh http show servicestate
It showed the used ports with their details.
The port 8000 was used by...
"FREEMAKECAPTURELIBSERVICE"
 

What's that??
I googled, and I found the name came from an application "Freemake Video Downloader".
Is that on my machine...?
...Oops



Ohhh... it was totally out of my memory. But yes, I downloaded it a long time ago and never used it.
I uninstalled it, and tried the command again with port 8000.

$ npm start
It works!



ok ... set upstream.
$ git remote add upstream https://github.com/mozilla/brackets.git

3. Set up "thimble"

Now, let's touch "thimble".
I forked and cloned the repository in the same way as "brackets" (but without the --recursive option this time).

$ git clone https://github.com/{your-username}/thimble.mozilla.org.git

then,
$ cd thimble.mozilla.org
$ npm install
$ npm run env
Errors!
 

In the log file, the first error showed up on the virtual box line.


Ummm... I skipped this step and tried the next step, "vagrant up", first.

$ vagrant up
Oh, it works!



now, npm start.....
$ npm start
Errors!


then, one step back and npm run env ....?
$ npm run env
Errors!


I got the same log file.
I found the word "shx" on the first error line.



"shx?" I googled and found an npm install command:
npm install shx --save-dev
 

then,
$ npm run env
Wow works!
and,
$ npm start
works!

Finally, "thimble" is on my desktop!!!


Now, I need to touch my bug part!
Let's log in!
.....?!





Oh, maybe I need "id.webmaker.org" and "login.webmaker.org" ???
I started to set up "id.webmaker.org".
Clone repo, copy environment file, npm install ..... Error !!!

After repeating this routine many times and failing to fix it, I realized I could reach my bug without logging in. So I decided to forget about the login stuff.

Now I can see the place of my issue, which is on the project dashboard!



And I can see editing results!



I believe I've gained a bit of tolerance for errors, since I get them every time.
Anyway, I've reached a starting point. Glad about that.

To be continued...

by Az Smith (noreply@blogger.com) at October 06, 2017 02:00 AM

October 05, 2017


Joao Rodrigues

Lab 5 - Getting a Thimble testing environment to run

Let's just begin this lab by saying although I am loving the experience, working in an open source environment can be very frustrating. Things don't work, you don't know why, and you feel yourself getting smaller and smaller as you dig deeper into directories and files that you have never seen before. I want to thank the lovely folks on Thimble that helped me through every step of the way, including of course our very own professor!

Of course, this was to be expected. My bug, which at a glance seemed easy enough, required that I have a testing environment set up so I could start tinkering around with the code in order to squash it. To reproduce the bug, I needed to be logged in so that I would have access to a view that allows me to publish a project, or to skip publishing so that it is only saved in that account. This leads to the name of the project being saved, but not the description. As such, I dug into the README file and followed the steps described there to a T in order to avoid any issues...

Installing Brackets and Thimble and running both allowed me to view the code editor, but it didn't allow me to login. In order to do this, I believe I need the services up and running in my test environment. 
I went through the instructions in the README file @gideonthomas pointed to; however, after days on end of trying, I couldn't figure out why I was not able to run these commands. Having already installed everything that was needed, when I ran
npm run env
npm install
vagrant up

 this is what I would see:

At first I thought I wouldn't bother anyone, and I would try to fix the issue on my own. Of course, this led to nothing. Posting a comment on my bug page led to our professor telling me the README file was actually wrong! Turns out the order of commands should have been:

npm install
npm run env
vagrant up

Surely that, plus having Brackets running, would do it, right? Nope.



As you can see I still had errors because the module "is-reachable" could not be found. After a brief back and forth with our professor, and @gideonthomas, a new bug was filed for this issue, along with a temporary fix to run in the console:

npm install is-reachable

This actually fixed the issue from above and allowed me to see Thimble's code editor.



So close! All I needed to do now was click on Sign In and attempt to recreate the bug!



Both the Sign In and Log In pages did not work at all. It's back to the workshop. Our professor suggested that I go on Thimble's chat on Mattermost where I could talk to some other devs and students who had faced a similar issue before. I saw that one of the prior suggestions was destroying Vagrant and building it again. That didn't work. Then another suggestion was reinstalling both VM Virtual Box and Vagrant. That didn't work either.

This was when I realized I needed to speak up and ask for help once more. Surely enough, within a couple of minutes someone pointed me to this comment:



And as you can see:



That worked! I now have my testing environment up and running, after such a huge struggle for so many days. I should have spoken up earlier, and maybe I would have had this issue fixed a long time ago. This was a very good learning experience, and it marked my week with this bug.

Long story short, I made no progress within the code, other than having a few ideas in my head after looking at it in Sublime. However, I did learn to speak up and not be afraid to ask for help, because everyone so far has been so helpful and nice to me. I encourage anyone who is in the same situation as me, feeling like they want to ask for help but can't, to just do it! These folks are passionate and want you to succeed!

So to recap: this week, I got a Thimble testing environment to run. Doing this led to discovering a new bug and having it assigned, and to finding an issue folks are currently working on, which has a temporary fix for now.

The goal for next week will be to have made progress with the actual bug I was assigned, rather than finding new ones 😭 I will be sharing my experiences with you all on here, so stay tuned!

by João Rodrigues (noreply@blogger.com) at October 05, 2017 11:57 PM


Dan Epstein

Firefox Developer Tools – Bugzilla@Mozilla Bug 1402394

Greetings everyone! In this blog post I discuss the entire process of fixing bug #1402394, which I found in the list of Firefox Developer Tools bugs.

BUG #1402394: Requirements

  • Firefox Build Environment installed
  • Mercurial Installed
  • Mozilla-build Installed
  • Create a Bugzilla@Mozilla Account

After successfully compiling Firefox and configuring Mercurial, I knew I was ready to start working on this bug. I just want to point out that it is a must to configure Mercurial so that when you generate a patch it will include your details for Bugzilla@Mozilla. I have also included the links for the bug and the documentation that is recommended reading before starting to work on it.

The Process

1. The first step is to navigate to the following directory using the Windows interface and the Mozilla-build start shell:

 mozilla-central\devtools\client\shared\components

2. Now it's time to rename the .js (JavaScript) and .css (Cascading Style Sheet) files in this directory, as well as those inside the sub-directories.

To rename, the following command is used, so you can see the results with the "hg diff" command afterwards.

  • "hg rename example-A ExampleA" 
  • The first argument is the existing file and the second is what you would like to rename to.
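
A batch version of this rename could be sketched as follows; the `pascal` helper below is hypothetical (not part of the actual workflow), and only the commented line at the end is the real Mercurial step:

```shell
# Hypothetical helper: convert "example-a" style names to "ExampleA" (PascalCase).
pascal() {
  local out="" part
  local -a parts
  IFS='-' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    # capitalize the first letter of each hyphen-separated chunk
    out+="$(tr '[:lower:]' '[:upper:]' <<< "${part:0:1}")${part:1}"
  done
  printf '%s\n' "$out"
}

pascal "example-a.js"   # prints: ExampleA.js
# for f in *-*.js; do hg rename "$f" "$(pascal "$f")"; done
```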

3. After renaming all the files in this directory and the sub-directories it's time to edit the moz.build and change the old file names.

4. Now, as I mentioned earlier, I use the "hg diff" command to see the differences between the first revision and the current one. This lists all the lines I have removed and changed. This is useful because when I generate the patch it will list all the changes that I have made.

5. Now I must edit all the lines that have the old file names from the parent directory Mozilla-Source. In order to save time and not go through each folder, I opened the text editor and searched the whole directory for lines that start with:

  • "\devtools\client\shared\components"
  • CTRL + SHIFT + F (To search all directories)
  • "+" - lines of code replaced
  • "-" - lines of code removed

6. Now that every file has been renamed, it's time to commit (save) the changes and to generate the patch! I used the Mercurial GUI, but you can also use the MozillaBuild start-shell.

Final Thoughts

The steps above show how simple it is to fix the bug, but it wasn't that easy, as I haven't listed all the errors I received during the process. I encountered multiple errors:

  • Unsorted list element - I had to change the element order in the moz.build file.
  • UI Test Fail - happened because I hadn't renamed the files in the test directory.

After fixing these errors, I submitted multiple patches through the Bugzilla@Mozilla website. The reporter, Mr. Brosset, was very helpful and informative in his replies. This was a great experience overall; I learned how to properly use MozillaBuild commands and Mercurial, as well as the process of what happens before a patch is ready to land (it must go through multiple tests using the Treeherder system, etc.). Below is the patch that successfully landed.

by Dan at October 05, 2017 03:34 AM


Diana Dinis-Alves

Catching Bugs: The Path to Becoming MushiKing

It’s time to start contributing to an open source project, and that means finding a “good first bug”. Told to focus on Mozilla’s open source projects, the hunt began. Finding a bug from the projects we were given wasn’t difficult, but overcoming the anxiety was another story - and that eventually made finding bugs difficult.

The next open “good first” bug I saw was gonna be the one I would request. I took an open source course to get involved in a community and to help beat my anxiety; it would be silly to chicken out. I opened Slack to get a link David had posted to Mozilla’s Activity Stream project, but I didn’t get that far. There was a link to a new Thimble bug instead.

slackThimbleBug

So I requested it.

requestingBug.png

The bug (Issue 2515, found here) in question involves the QuickEdit icon that Thimble uses. When editing a color in a CSS file, the icon appears, but if you change to a font file the icon won’t disappear. Minor, but annoying. Demonstrated below.

thimbleBug2515.gif

Since Thimble uses Node.js, I’m hoping that this will be able to introduce me a bit to it. But ultimately, I’m hoping that working on my first open source bug will teach me how to use GitHub and become part of the open source community.

The bug seems to be a simple fix, although setting up Thimble is overwhelming. Having had problems with setups in the past (thanks BitDefender), I’m not sure how agreeable my computer will be with the setup. Taking this into mind, along with my schedule and school work, I’m hoping to have the bug done within a week and a half.

Thimble can be found on GitHub. Feel free to visit the website and use it too!

Other

I’d like to get involved with Neon, but I’ve yet to work up the courage to join the Neon Slack channel.

I also decided to register for Hacktoberfest, I’d love to make four pull requests this month and receive a t-shirt. But I’ll be happy just to get stickers for participating. If you’re interested in applying for Hacktoberfest, click here, maybe you’ll be able to receive a T-shirt!


by ddinisalves at October 05, 2017 02:25 AM


Svitlana Galianova

Baby steps

I spent my last week and a few days this week trying to find the perfect first bug in order to start contributing.
It was tough...
I felt like a lost child in a big mall. You have so much choice that it becomes overwhelming!

"What if I am not able to fix that problem...", "What if I will be stuck and will not only lose some marks from a final grade but also my confidence as a programmer...", "What if I will have some issues with setting up the environment...", ""What if...", "What if..."

It seemed like I had considered every possible bad scenario.

I did not have a bug and the deadline to pick it was coming up.

I finally got the courage to ask my professor (David Humphrey) for help. I came into his office, and within a few minutes he told me where to go, what to do, and whom to ask!

I got connected with the Mozilla Debugger team through their Slack channel. I got my chance to polish my React skills and get involved in Mozilla's large and friendly community!
Almost immediately, Jason Laster contacted me and suggested I start with this bug. Earlier the same day, Dave had been discussing how helpful Jason was and how he assisted my classmate in submitting his first bug!

Even though I am still a bit confused about a lot of things, it is nice to know that there are people who are ready to hold your hand when you are doing your baby steps!


by svitlana.galianova (noreply@blogger.com) at October 05, 2017 12:53 AM


earle white

Contributing to the Mozilla code base

Navigating sites like Bugzilla can become really overwhelming, making getting started with a good first bug challenging and somewhat frustrating. Most bugs seem to be limited to front-end development languages, forcing me to leave my comfort zone, Java.

This is a list of some of the bugs that I tried to file a request for.

I made the decision to choose these bugs not because I love JavaScript, but with the hope that they might give me some experience on my journey of one day becoming a full-stack developer. In preparation for landing one of these bugs, I have begun communicating with some developers via IRC chat and started revising and reteaching myself JavaScript and Python.


by ewhite7blog at October 05, 2017 12:23 AM

October 04, 2017


Henrique Coelho

GCC compiler optimizations for loops

The GCC compiler provides several optimization flags that, when used, will signal the compiler to try to optimize the code being produced. Using the flag -O0, the compiler will not optimize the generated code too much - this is ideal when we just want to compile the program to test it. The -O3 flag, however, will heavily optimize the output, but it is a much slower process.

In this post, I will compare some outputs of compiled programs using the -O0 and -O3 flags.

First, I wrote the following code in C:

#include <stdio.h>
#include <stdlib.h>

#define SZ 1000

int main()
{
  srand(1);

  int arr1[SZ];

  long long sum = 0;

  for (size_t i = 0; i < SZ; i++) arr1[i] = rand();

  for (size_t i = 0; i < SZ; i++) sum += arr1[i];

  return 0;
}

It's a very simple program: it fills an array with a bunch of random numbers, then sums all of them together into the variable sum.

I compiled this program with both the -O0 and -O3 flags, and here are the results (more specifically, the resulting assembly code for the main function):

/* COMPILED WITH O0 */
/*------------------------------------------------------------------------------
  Just some initialization stuff. Nothing to see here
------------------------------------------------------------------------------*/
72a:    push   %rbp
72b:    mov    %rsp,%rbp
72e:    sub    $0xfd0,%rsp
735:    mov    %fs:0x28,%rax
73e:    mov    %rax,-0x8(%rbp)
742:    xor    %eax,%eax

/*------------------------------------------------------------------------------
  Pushing "1" as argument and calling "srand"
------------------------------------------------------------------------------*/
744:    mov    $0x1,%edi
749:    callq  5f0 <srand@plt>

/*------------------------------------------------------------------------------
  Initializing "sum" to 0
------------------------------------------------------------------------------*/
74e:    movq   $0x0,-0xfc8(%rbp)

/*------------------------------------------------------------------------------
  FIRST LOOP
  ----------
  Here is where the first loop starts
  Initializing the first "i" to 0
  Notice that we are also jumping to line 783. On line 783 we are checking if 
  the condition is true
------------------------------------------------------------------------------*/
759:    movq   $0x0,-0xfc0(%rbp)
764:    jmp    783 <main+0x59>

/*------------------------------------------------------------------------------
  Getting the random number and recording it in the array
------------------------------------------------------------------------------*/
766:    callq  600 <rand@plt>
76b:    mov    %eax,%edx
76d:    mov    -0xfc0(%rbp),%rax
774:    mov    %edx,-0xfb0(%rbp,%rax,4)

/*------------------------------------------------------------------------------
  Incrementing "i"
------------------------------------------------------------------------------*/
77b:    addq   $0x1,-0xfc0(%rbp)

/*------------------------------------------------------------------------------
  Comparing "i" with the limit: if it was not met, jump back to line 766
------------------------------------------------------------------------------*/
783:    cmpq   $0x3e7,-0xfc0(%rbp)
78e:    jbe    766 <main+0x3c>

/*------------------------------------------------------------------------------
  SECOND LOOP
  -----------
  Here is where the second loop starts
  Initializing the second "i" to 0
  Notice that we are also jumping to line 7bc. On line 7bc we are checking if 
  the condition is true
------------------------------------------------------------------------------*/
790:    movq   $0x0,-0xfb8(%rbp)
79b:    jmp    7bc <main+0x92>

/*------------------------------------------------------------------------------
  Getting the number from the array and adding it to "sum"
------------------------------------------------------------------------------*/
79d:    mov    -0xfb8(%rbp),%rax
7a4:    mov    -0xfb0(%rbp,%rax,4),%eax
7ab:    cltq   
7ad:    add    %rax,-0xfc8(%rbp)

/*------------------------------------------------------------------------------
  Incrementing "i"
------------------------------------------------------------------------------*/
7b4:    addq   $0x1,-0xfb8(%rbp)

/*------------------------------------------------------------------------------
  Comparing "i" with the limit: if it was not met, jump back to line 79d
------------------------------------------------------------------------------*/
7bc:    cmpq   $0x3e7,-0xfb8(%rbp)
7c7:    jbe    79d <main+0x73>

/*------------------------------------------------------------------------------
  Magic that returns the function. Here be dragons. Trust me: stay away from it,
  except for one little part: notice how it is pushing 0 into eax
------------------------------------------------------------------------------*/
7c9:    mov    $0x0,%eax
7ce:    mov    -0x8(%rbp),%rcx
7d2:    xor    %fs:0x28,%rcx
7db:    je     7e2 <main+0xb8>
7dd:    callq  5e0 <__stack_chk_fail@plt>
7e2:    leaveq 
7e3:    retq   
7e4:    nopw   %cs:0x0(%rax,%rax,1)
7ee:    xchg   %ax,%ax

Great. It really looks a lot like my original code. Now, before I show the code for O3, let me tell you a tale about processors. Every time we have an operation, the resulting value is not the only thing we get back: processors also have things called flags, which they set in order to give you quick information about the result. For example: is the result equal to 0? Is it greater than 0? Did it overflow? And so on. On x86 machines, the "equal to zero" flag happens to be called ZF. These flags are what the jump commands generally use. For example, the command "jne" (jump if not equal) will look at the ZF flag to decide if it should jump or not.

Why I am saying this is a total mystery and 100% not related to the following piece of code, which was compiled with the O3 option:

/* COMPILED WITH O3 */
/*------------------------------------------------------------------------------
  Just some initialization stuff. Nothing to see here
------------------------------------------------------------------------------*/
5c0:    push   %rbx

/*------------------------------------------------------------------------------
  Pushing "1" as argument and calling the "srand" function.
  Notice what it is doing with the b register: it is pushing the value 0x3e8
  into it. What is 0x3e8? It happens to be "1000": the number of times to loop
------------------------------------------------------------------------------*/
5c1:    mov    $0x1,%edi
5c6:    mov    $0x3e8,%ebx
5cb:    callq  590 <srand@plt>

/*------------------------------------------------------------------------------
  Calling the "rand" function
------------------------------------------------------------------------------*/
5d0:    callq  5a0 <rand@plt>

/*------------------------------------------------------------------------------
  Subtracting 1 from register b, which is where the original "1000" was: every
  time it loops, it subtracts 1. The result of this operation will set the flags
  in the processor. For example: if we reach 0, it will set the ZF flag. This
  would be a very simple way to know when we finished looping. Still, totally
  not related to what happens in the next line
------------------------------------------------------------------------------*/
5d5:    sub    $0x1,%rbx

/*------------------------------------------------------------------------------
  Jumping back to the beginning of the loop if the ZF flag was set.
  Here, the compiler took steps to avoid extra operations: it is not even using
  the "cmp" (compare) operation, but checking the ZF flag from the subtraction
  directly to know if the loop is done or not
------------------------------------------------------------------------------*/
5d9:    jne    5d0 <main+0x10>

/*------------------------------------------------------------------------------
  More magic to return from the function. No flags here (I think), but there
  is one more thing: instead of pushing 0 into eax, it is doing a XOR on itself.
  Any value XOR'ed with itself results in 0 - this is a quicker way to set a
  value to 0 than pushing 0 into it
------------------------------------------------------------------------------*/
5db:    xor    %eax,%eax
5dd:    pop    %rbx
5de:    retq   
5df:    nop

Just by looking at the code produced, it is clear that the compiler took a huge step to make the code more efficient: it completely eliminated a loop, and it got rid of my useless array (I wasn't using it anyway!). It also eliminated the use of the "i" variables that keep track of my loop index, and instead, just used a counter that went from 1000 to 0.

There is one last test I want to do: I noticed how the compiler eliminates the "i" variable in order to make things faster. But what if I need that variable to do something else?

This time, I compiled these programs:

#include <stdio.h>

int main()
{
  for (size_t i = 0; i < 10; i++) puts("a");
  return 0;
}

#include <stdio.h>

int main()
{
  for (size_t i = 0; i < 10; i++) printf("%zu", i);
  return 0;
}

The second program uses the variable "i" in its output; the first one does not. Let's check the first version (this time, I will remove some of the non-important parts of the assembly code):

/* COMPILED WITH O3 */
572:    lea    0x1bb(%rip),%rbp

/*------------------------------------------------------------------------------
  Moving "10" into the b register
------------------------------------------------------------------------------*/
579:    mov    $0xa,%ebx

57e:    sub    $0x8,%rsp
582:    nopw   0x0(%rax,%rax,1)

588:    mov    %rbp,%rdi
58b:    callq  550 <puts@plt>

/*------------------------------------------------------------------------------
  Subtracting 1 from 10 and comparing the ZF flag to jump back to 588
------------------------------------------------------------------------------*/
590:    sub    $0x1,%rbx
594:    jne    588 <main+0x18>

Again, the compiler optimized the loop by removing the "i" variable. How about in the second version?:

/* COMPILED WITH O3 */
582:    lea    0x1bb(%rip),%rbp

/*------------------------------------------------------------------------------
  Setting register b to 0
------------------------------------------------------------------------------*/
589:    xor    %ebx,%ebx

58b:    sub    $0x8,%rsp
58f:    nop

590:    mov    %rbx,%rsi
593:    xor    %eax,%eax
595:    mov    %rbp,%rdi

/*------------------------------------------------------------------------------
  Adding 1 to register b
------------------------------------------------------------------------------*/
598:    add    $0x1,%rbx

59c:    callq  560 <printf@plt>

/*------------------------------------------------------------------------------
  Comparing b to 10 and jumping if they are still different
------------------------------------------------------------------------------*/
5a1:    cmp    $0xa,%rbx
5a5:    jne    590 <main+0x10>

This time, since we need the variable "i", the compiler gave us one.

In my next post, I will talk about another strategy for optimization: vectorization.

by Henrique Salvadori Coelho at October 04, 2017 11:33 PM

5a1:    cmp    $0xa,%rbx
5a5:    jne    590 <main+0x10>

This time, since we need the variable "i", the compiler gave us one.

On my next post, I will talk about another strategy for optimization: vectorization.

by Henrique at October 04, 2017 11:33 PM


Yankai Tian

Bug not bug

Finding bugs on Bugs Ahoy! and try to assign it, working with community and solving the bug, I had to say, I never did this before.

This was totally new to me, and tough to me. The reason is that I don’t trust myself that I have enough ability and knowledge to solve a bug in real life.

I assigned the bug 1080232, which is about android stuff. I used to go for a bug in RUST, but I’m afraid that I have no experience on RUST thus I cannot contribute it.

Anyway, that was my first time. Feeling panic right now. 😦


by ytian38 at October 04, 2017 09:45 PM


Sofia Ngo-Trong

Building the glibc library in Linux

Last post, we built a random open-source software package on Linux. Today, we will be building glibc, the GNU C Library package. This library project provides the core libraries for the GNU system and many other Linux-based systems, and it provides many critical APIs that provide foundational Linux functions. It provides “many of the low-level components used directly by programs written in the C or C++ languages” and indirectly by other programming languages such as “C#, Java, Perl, Python, and Ruby”. The project was started around 1988, and every 6 months there is a new release. It is maintained by a community of developers, and it is designed to be a “backwards compatible, portable, and high performance ISO C library”.

Clearly, it is a very important package.

In the project’s “Get Started” page, there is a lot of practical information on how we can get, build and test the libraries.

So, let’s apply the steps.

First, we will get the latest released version of the package. All of the tar files can be found at http://ftp.gnu.org/gnu/glibc/ . The latest version file is glibc-2.26.tar.gz, which is 28 MB large. We will download this into a local directory via FTP. I run:

wget http://ftp.gnu.org/gnu/glibc/glibc-2.26.tar.gz

Now, we have to extract the file:

tar -zxvf glibc-2.26.tar.gz

Now, we enter the project directory, and here are the contents:

glibc.PNG

We need to build the software without actually installing it, because we do not want to destroy to currently installed version within the Xerxes system at Seneca. This project page contains clear basic info about how to go about this, and the project’s INSTALL file contains very detailed info.

To build glibc without installing it, we just do the standard configure, then make.

According to the INSTALL file, we cannot compile glibc in the source directory; we have to build it in a separate build directory, which would be on a parallel level as the source directory. So I will back out of the source directory, and I will create another directory to put the object files in. This will allow us to remove the whole build directory in case of an error, and then we can just rebuild it again on a clean slate without affecting the source code.

So I will make this parallel build directory in one directory level above where the source files are held.

mkdir glibc-build

And now I enter this directory. Now we have to run the configure script that was in the source directory. So, keeping aware of the relative path of the configure script to our current location, we run the script, and the project web page advises us to use the switch –prefix=/usr, which means that we will create a glibc that will “load and use all configuration files from the standard locations”. There are also a bunch of other switches we could use, such as the option to enable add-on packages as part of the build.

So, from my new build directory, I run:

../glibc-2.26/configure –prefix=~

And now my directory has these items:

after config

The config.make file was generated with our system parameters.

configmake.PNG

Now, we will build the software:

make

A lot of output comes out. The INSTALL file says that we can expect this process to last several minutes, and in terms of error messages, we could just look out for ‘***’ to see if there was a serious error.

After roughly 15 min, it looks like the build finished. Here are the final lines of the output:

final lines

Hmm.. Well I didn’t notice any ‘***’… So perhaps it’s built successfully?

The build directory now contains all of these files:

ls.PNG

Okay, that’s a lot of objects.

Now, we can run ‘make check’ which the INSTALL file says will run test programs which use some of the library facilities. If an error comes from that, we should not use this built library, and we are encouraged to report the bug if it’s not already known.

make check

After what feels like an eternity (maybe half hour?), here are the test results:

testresults.PNG

So now, we need to prove that we are able to use this build of the glibc library, as opposed to the one that’s already installed in the system. We will botch the source code an introduce a blatant bug in this build version of the glibc, and then we will make a program that utilizes that code.

Looking at the manual for glibc, I found a function that would be interesting to mess around with. The strcat function is a string concatenating function that is declared in the header file “string.h”. I guess it would be fun to make it concatenate other stuff whenever we call it?

So in our source directory, I go into the string folder, and I see my target code: strcat.c . Muahaha. Time to wreak havoc.

Here is the original source code:

strcat source code.PNG

And I have tampered with it like so:

owned.PNG

Maybe a bit abrasive… But it will get the bug point across.. 😀

Here is my c program that will utilize strcat :

concatMe.PNG

Now, using the system glibc install, after we compile this program and run it, we get:

helloworld.PNG

Good, working as it should.

To test our own app, using this build of glibc, the guide says that we can run: ./testrun.sh /path/to/app

where the testrun.sh script is located in the build folder. So let’s do that. I run:

./testrun.sh ../concatMe

However… I get the exact same result. *scratches head *.

I spend the next several hours debugging. I tried removing the build file, and reconfiguring with different variations of arguments for the –prefix switch, such as the absolute path for my user account, the full path to the glibc-build folder, any others. Still didn’t work. I tried changing my tampered code around so that it would just copy a string right into the destination variable. Still didn’t change it. I looked at other students’ labs. I was following the exact same procedures as them, but yet, my tampered code didn’t show up.

The last modification I did to the source code for strcat is:

strcat.PNG

I reconfigured the build and ran make on it again. Still no change in code. At this point I was getting desperate, so I tried to botch another source file to check whether it was perhaps just the strcat function. So I went into the source directory’s stdlib folder, and modified the rand.c file. Instead of it returning a random number, I made it return just 0.

I added some code in my test c program that would utilize the rand() function:

rand

Now I compile my program, and just run it normally:

random program.PNG

Works as expected under the official install.

Now let’s try to run with my build.

./testrun.sh ../concatMe

random 0

Eureka! It works!!! I mean, it doesn’t work! I mean, the bug is working! You know what I mean.

Now why on earth did my other bug in strcat not work…

I tried doing some research, and in the glibc manual online, in the section for strcat, I saw this excerpt:

strcat info.PNG

The manual for strcat described that due to the function having to determine the length of the strings, there could be unexpected results when the string sizes overlap each other, and that generally, we should avoid this function. Hm. Good to know.

At least we know that I am definitely using my build of the glibc library from the example with the rand() function. That is the goal for today. Understanding how to work with strcat… could be left for another day, hehe…

 

 

 


by sofiangotrong at October 04, 2017 09:39 PM


Ajla Mehic

First Open Source Bug

For my first bug, I decided to work on Thimble, Mozilla’s online code editor.

3.png

I have used Thimble a few times, so I thought it would be interesting to be able to contribute to it. At first I was nervous when searching for bugs because I didn’t think I would find one that suited my abilities. Thankfully, there were many issues labeled “good first bug” and I was able to find something there.

The bug that I chose was this one, where the text in the input field when renaming a file is cut off. The bug was originally said to occur on Mac, but when I tried reproducing it I found that it happens on Windows as well. I also tried different browsers and found that it only happens in Firefox. Here is an example of what it looks like in Chrome vs Firefox:

I decided on this bug because it doesn’t seem too complicated and it looks like something I might be able to do. And so, I asked if I could work on it. I almost immediately received a friendly response and was assigned to it. For now, I’m not sure how long it will take but I am excited to get started and learn more from this experience.


by amehic at October 04, 2017 09:31 PM


Marco Beltempo

Mozilla Thimble: First Open Source Bug

 

Bug Hunter

After having the chance to test out the waters of building an open source project (Building Firefox Source) , it was time to branch off (git pun intended) and begin researching other open source projects that we would like to learn more about and contribute to.

During the first year of my program, we were taught the basics of HTML, CSS, and JavaScript.  I began using an online editor called Thimble. Developed by the Mozilla Foundation in partnership with CDOT at Seneca College, Thimble is an online code editor that allows users to easily create, hack and publish their own pages, display live changes, all in a user friendly interface. I was interested in learning more about Thimble and figured it would be a great start for my first bug.

Thimble uses GitHub as their version control platform. To simplify the search, the issue tracker has a  tag which sorts bugs more suitable for beginners. After searching through the issues page, I came across Issue#1918: Change Publish popup title when project is Live.

image

To summarize:
  • user updates existing live project
  • the publish popup title always says “Publish your Project”, although it’s already been published
  • FIX: change it to “Your Project is Online” when the project is published.

–UPDATE-10/5/2017–
-I came across a visual bug which caused content to overflow on the main screen.
-filed my first issue#2518
-will follow up in a separate post

Put a Claim on It

The last thing you want to do is spend a bunch working on a bug, only to find out it has already been assigned to someone else or closed. It’s important that you scan through the comments sections to check the history as well as the Assignee’s status to make sure someone else wasn’t already working on the issue. If everything looks good, claim the issue for myself.

The issue was first opened on March 27 , a user by the name of rkgupta21 had taken on the bug  around the same time. There has been no activity since April 16,  so I decided to leave a comment asking them if it’s okay to take over the work.

Within minutes I had received confirmation from flukeout that I could begin working on the issue.

 

Ready, Set, Error!

After being assigned my first bug, I began setting up my Thimble build environment. This required downloading a few dependencies and forking the Brackets and Thimble repositories. The automated installation process was execute using npm and vagrant tools. After my experience with building Mozilla’s Firefox, I couldn’t believe how smooth this setup process was going. Within the first hour I was able to successfully build and run a local Thimble development server with full functionality…or so it seemed…

If we refer back to Issue#1918,  this requires a user to be logged in and re-publish an existing project. Thimble use’s  Mozilla’s publish.webmaker.org API in order to store users, projects, and files. For security and authentication Thimble uses  login.webmaker.org and id.webmaker.org.

While trying to execute the login/sign-up features the local server continuously return a blank page with little to console feedback.

When attempting to login/sign-up the server returns a 404 error.

 

I joined the Mozilla Thimble community on their Mattermost channel

 

Members were quick to recommend individuals that would be more familiar with the topic

Unfortunately until I get this error solved I wont be able to get started on this bug. I will continue to update the list below with the build status and bug progress.

Build Status
  • 10/5/2017
    • will attempt separate build environment on a linux machine(TO-DO)
  • 10/4/2017
    • error when trying to launch id.webmaker services
      • uninstall/reinstall VMBox, Vagrant (NO FIX)
  • 10/3/2017
    • error when trying to launch id.webmaker services
      • vagrant destroy + vagrant up (NO FIX)

Bug Status
  • 10/8/2017

Reflection

Considering that this is my first time working on an open source project, the nerves were definitely there. When learning a new subject or taking on a large task,  I can quickly become overwhelmed when looking at the process as a whole.

One thing that’s important to remember is, EVERYBODY has to start somewhere.

Realistically in the case of open source, nobody has a complete understanding of the project inside and out. Successful projects aren’t driven entirely by funding, but by a strong community that’s willing to work together in finding solutions and creating new ideas. What you may be an expert in on end can be the solution to someone’s problem on another.

Although a lot of this is new and unknown to me, I am excited to be gaining experiencing and building my knowledge of the open source world. Also, having the opportunity to collaborate with a great community of contributors.

by Marco Beltempo at October 04, 2017 02:54 PM


Jiel Selmani

Finding A Bug Is Harder Than You Think

Firefox DevTools

It's not easy looking for a bug. No, really.  With all of these issues that are all over GitHub, looking to be solved, it took me all day to figure this out.  

I started with my interests which included virtual reality, so I decided to head over to the A-Frame repository to get started.  Head over to the Issues tab and look for some issues with the easy label and come across a really interesting one.  To discuss it quickly, the issue was related to virtual hands showing up even when no controllers were connected resulting in a poorer user experience.  I did exactly what I had to do, and left a comment to see how I could help.  Someone else had taken it, but I wanted to try in the event that they couldn't finish it.  Check it out below.

A response! Welcome to Open Source!

The contributor who took it over responded and I was excited to start interacting in the open source space.  He was welcoming as well which was great.  However, as you can tell, I also referenced a merged commit that makes it look like the issue is closed.  By doing so, I commented on the commit in hopes to get an answer in the event that I actually completed the work and it was deemed unusable.

Generally, this wouldn't bother me but because of our time constraints for the open source course I'm taking I wanted to make sure I could be assigned something I could work on and continue.  Below is my comment on the merged commit.

Still nothing. :(
Since I wasn't able to get an answer I decided to move on but keep my eyes and ears on this in the event I could eventually work on it.  I started taking a look on Bugs Ahoy! and found one I could also work on.  I'm pretty sure we went over it in class too so I did exactly what I did last time in hopes of getting an assigned bug that I could start right away.


Pretty sure I've seen you before.
All the enthusiasm!
I understand that everyone in this industry works hard because it really is a fast-paced environment.  No one is going to just sit and wait by the forums for someone to comment.  At this point, I was getting nervous about not being able to find something I was interested in working on, but...

Like a million unicorns appearing out of nowhere, a miracle occurred.  Dave Humphrey came through with a post in our Slack channel.  A beautiful link to a Twitter page....in all of its glory.


Dave Humphrey (@humphd) to the rescue!
Funny enough, Dave and I discussed different topics when I first introduced myself to him and he brought up DevTools after I explained my desire to work on items that are the unsung heroes of software development.  I got on this right away to make sure I didn't miss out.  



After I found it and read what the bug entailed, I got right in the comments and after being assigned to the issue, I was asked to join the Slack channel to get started.  What I like about this team and environment is how transparent it is.  In their documentation, they state how they will mentor a new contributor if it's their first PR.  No joke.


I joined Slack and introduced myself, only to be greeted by Jason Laster himself.  When I asked a relatively simple question that I could have answered through trial and error, he responded without hesitation.


Now, I have the project cloned and am ready to get started.  I had to install a new dependency management system, Yarn, and with the updated Node version I am ready to get started.  I'm a little nervous about it, simply because I want to do a good job to prove to myself that I can pick up something new and be able to contribute.  It's a little intimidating because you don't know the reaction you'll receive but being a confident people person who practices humility, I'll brave the new territory with pride.

Let's do this. 

***UPDATE #1***

I heard back from Patrick Brosset for the CamelCase bug where it was assigned to me.  I will gladly welcome it in order to get more comfortable. :) 

***UPDATE #2***

Since I never linked to any of my bugs directly, I'd like to do that now.  Dave also thought it was a much better idea and I agree with him.  Also, I have had my first PR merged into the debugger.html project which is a phenomenal feeling.  Anyway, here they are:

https://github.com/devtools-html/debugger.html/pull/4230
https://bugzilla.mozilla.org/show_bug.cgi?id=1402387

by Jiel Selmani (noreply@blogger.com) at October 04, 2017 12:38 AM

October 03, 2017


Hans van den Pol

Finding my first open-source bugs

This week we started off the project. The first assignment was to choose 2 bugs you’d like to work on. In this blog I will tell you which bugs I chose and why.

Which bugs did I choose?

First of all, I wanted to work with Java, JS or HTML. After searching for unassigned bugs on the ‘Bug ahoy’ page for a while, I found 2 interesting ones:

Bug 1 – Firefox for Android

source

The first bug is about removing deprecated StringHelper functions and replace it with new StringHelper functions. This code is for Firefox for Android and is written in Java. I chose this bug, because I like to work with Java. I am hoping to learn more about Java combined with Android.

Bug 2 – API request in Firefox

source

This bug is about the default header in API request which has to be changed to accept JSON files. I chose this bug, because I’m not too common with API’s. I would like to learn more about the GET/POST requests and how certain headers are handled in the browser.

Challanges

Because this is my fist time working on a open-source bug/project, I think I will be quite challenged. I’m going to work with different subject like: new technologies, other programming languages and GIT. Therefore, I need to study a lot to be able to complete my bug(s).

 

 

 


by opensourcetoronto at October 03, 2017 08:09 PM


Fateh Sandhu

Found a bug!

Finding a Bug is easy but….

Finding a bug you want to fix is easy to find on Github since there are so many of them. But the problem is to find the right one that you can work on it. After trying to find bugs in Bugzila, I had a hard time trying to find an open bug and then get a reply back.

Then I decided to switch over to GitHub in order to find something for me. Within 30 minutes of trying to find something I got two replies back from the contributors.

Which bugs did you choose? Include links to them.

Screen Shot 2017-10-03 at 15.37.08.png

I found two bugs to work on. The first is a Portable Document Format (PDF) viewer that is built with HTML5 called pdf.js. It is supported and contributed by the Mozilla labs. For this bug I have to remember the view position after refreshing the page. For example if the user refreshes the page while scrolling the pdf file and they refresh the page, they should be able to return to the exact same location after the refresh.

https://github.com/mozilla/pdf.js

Screen Shot 2017-10-03 at 15.37.25.png

The other one that I picked is Thimble. Thimble is Mozilla’s online code editor built into the browser. The bug is to add the functionality of “Add the embed snippet to the thimble publishing dialog”

https://github.com/mozilla/thimble.mozilla.org/issues/1543

Why did you choose these bugs? What was it about these projects that interested you?

The reason I chose the Thimble bug was because it is based on the Brackets text editor. I have been familiar with that software for a while now. Also the functionality that needs to be added seems to be important and yet very easy and simple to add compared to some of the more complicated things in this project.

The reason for picking the pdf.js bug was because it seems to be a very good first bug for me. I feel I can tackle something of this scale and actually make a difference instead of just trying.

What are you hoping to learn by working on these bugs?

By working on these bugs I’m hoping to get a real feel about working with large projects. I also hope I can learn a few new programming techniques while working on these.

What are you nervous about?

I am nervous about working with something that has real life consequences. With something that is in the industry has direct consequences without a safety net. It is also nerve-racking to have a fear of failure, but I guess that is normal when trying to do something new.

 

Where do the developers on the project(s) you chose communicate online? What happened when you introduced yourself? Were you greeted warmly, ignored, or met with hostility?

The developers on this project are all on GitHub which makes it very convenient for the contributors to talk and communicate to each other. When I introduced myself I was greeted with respect and warmth. I wasn’t ignored or looked down at. Also, the responses were prompt and easy to understand.

 

What’s your guess as to how long it will take you to solve this bug? Later, you’ll be able to compare this estimate to the reality of what it really involved.

I have a feeling that it shouldn’t take that long to fix this bug. But with everything it never goes as planned. But I’m hoping for the best.


by firefoxmacblog at October 03, 2017 07:18 PM


Eric Schvartzman

Atom vs Sublime 3

Getting To Know Your Code Editor

Both Atom and Sublime are great text editors for writing software code designed for the web. They are both customizable, cross-platform, and easy to use. I like using both text editors but I tend to favour Atom more so than Sublime 3. This blog isn't so much about which code editor is better since that is more aligned with personal preference. Instead I will be discussing the differences between Atom and Sublime 3 regarding their interfaces. After I compare each code editor I will discuss why I prefer to use Atom and explain how to install 2 of the following packages for it:
  • Atom IDE
  • TypeScript + JavaScript IDE language

Indentation Size

One of the first differences I noticed between Atom and Sublime was the way in which you modify the settings of the code editor. Atom relies more on a user interface to change settings, whereas in Sublime you have to open files and insert lines of text. The first setting I changed was the number of indentations the text editor applies when pressing the tab button. This is a very useful setting to know how to change because each software company writes their code base with a different format of white spacing, indentation, etc. In order to modify the indentation in Atom, you have to go to the settings panel. I've attached a video snippet showing how this can be achieved:



The same setting can also be changed in Sublime 3, but the process is different. In Sublime 3 to change the indentation size you have insert a key-value pair inside of the settings file. You can see how this is done by watching the following video snippet:



Installing Packages

After modifying the indentation size I wanted to know how I could install new packages. In Atom this process is very simple due to the interface. In order to install a new package you have to go to settings > packages, and then search for a package you would like to install. Below is a video snippet demonstrating what this looks like:



In Sublime 3 the process for installing packages is a little less intuitive compared to Atom, but nonetheless it is still very simple. In the preferences tab you have to go to Package control, and when the text box appears you have to type in "install package". This video will show you what the process will look like:



Key Bindings

The very last thing I wanted to figure how to do was to change the key bindings in case I decide to customize the commands in my code editor. In Atom all you have to do is go to the settings panel and then click on the key bindings tab. Once you are there you have to copy the keybindings you want to change and paste in them inside the key map file. The interface for the key bindings will look like so:



In Sublime 3 the process of changing the key bindings is similar to Atom, but with less steps. You just go to the preferences tab and one of the options will be key binding. Once you open the key map file you will see a list of key bindings that you can make changes to:



Why I Choose Atom

My favourite code editor out of the two is Atom due to it's ease of use and nice looking interface. Because it was created by GitHub, Atom is an open source software that is supported by a large online community that actively maintains the code base. I like the visual design of Atom compared to Sublime; it has a sleek and modern feel to it. Atom also has more packages that you can install compared to sublime. As of September 26, 2017 Atom has 6,804 packages, whereas sublime offers only 4,298 packages.

Earlier I mentioned that I would explain how to install 2 different packages. The first package I will go over is the Atom IDE.

Atom IDE

To install the Atom IDE package you have to search up atom-ide-ui in the install interface.



Once the Atom IDE is installed you will now have access to new powerful features, such as code formatting, code highlighting, diagnostics, and more. The diagnostic feature is something I really like because it has some cool functionality, such as the outline view for displaying variables declared in a file, a diagnostic pane that shows you a list of all the errors and warnings you have in a file, and also a pop up error box that displays whenever you hover over an erroneous piece of code. This is what the diagnostic feature looks like:



TypeScript + JavaScript IDE Language

In order to use the diagnostic feature in the Atom IDE you have to install some IDE language packages. There are several languages that work with the Atom IDE:
  • TypeScript & JavaScript
  • Flow
  • C#
  • Java (Java 8 runtime required)
  • PHP (PHP 7 runtime required)

Final Thoughts

Overall I had a positive experience working with both Atom and Sublime 3. Although I prefer to use Atom, Sublime is still a good code editor to use. Sublime has been around longer and it has a larger user base, with over 12.8 million users in total. My favourite part was installing the Atom IDE package because it provided an extra feature that is very useful for web development. The diagnostic feature that comes with the Atom IDE package makes it easier to find bugs in JavaScript files and it speeds up the time it takes to fix those bugs by allowing users to click on the location of the error in the diagnostic pane.

by Eric S (noreply@blogger.com) at October 03, 2017 05:10 PM

Building and Modifying Firefox from Source Code


Building Firefox

(This blog post is not intended to be a step by step guide on building Firefox from source. Instead you can find detailed instructions here)

This week in my class for "Topics in Open Source Development" I learned how to build the Mozilla Firefox browser from it's source code. This process required several prerequisites, and since I built Firefox using a Windows machine the requirements were as follows:

  • 64-bit version of Windows 7 or later
  • About 40 GB of free space on your hard drive
  • Visual Studio 2015 or 2017 (each version requires a different configuration setup)
  • Rust (a system programming language)
  • MozillaBuild package from Mozilla

After the pre-requisites were setup on my laptop, the next step was to download the Mozilla Firefox source files. There are several ways of doing this; I followed the popular way, which was to use the Mercurial version control system. In the MozillaBuild shell I executed the command:


hg clone https://hg.mozilla.org/mozilla-central

From there it took around 40-50 minutes to complete the download. Below is an image of what it might look like



After the download was complete, the next step was to build the Mozilla Firefox browser from the command line. The two commands needed to build Firefox are:

mach bootstrap
mach build


The command mach bootstrap allows you to pick which version of Firefox you want to build, and it also allows you to modify Mercurial in order to customize/enhance your experience. The initial output on the command line will look something like this:

The second command, mach build, is what builds the actual Firefox browser. This process took about 40 minutes to complete. The image below shows what to expect:


Modifying the Source code

The exciting part started here, because this is when I was able to make modifications to the source code for the Firefox browser. There are so many files and directories that it can be daunting to know where to start! Fortunately I was given a suggestion on where to begin: modify the browser.js file, located in the directory browser/base/content/. In that file I changed the way the browser behaves whenever you open a new tab, so that opening a new tab loads the website YouTube. I accomplished this by making changes to the function BrowserOpenTab(). Inside the scope of BrowserOpenTab() there's another function called openUILinkIn() that gets executed at the end, and by passing in the URL of YouTube as an argument it tells the browser to load the new tab's page with the URL specified. Below is an image of the modification I made (I outlined the value of the argument to the function I modified):



I also made modifications to the visual appearance of the browser to match the theme of YouTube. I changed the background colour of the horizontal bar that contains the navigation tabs, and I changed the border colour of each individual nav tab to white; as a result the browser has a colour theme that complements YouTube's website. Below is a gif of the final result:




Final Thoughts

Overall I found the entire process of building and modifying Mozilla Firefox from source to be a long but rewarding process. The build process took a long time to complete, but learning how to modify the source code of Firefox took even longer because I didn't have an API to follow. I had to learn how to scan through the code to find the correct function to modify, and through trial and error I was slowly able to produce an end result that I was satisfied with. I had initial help from my Professor David Humphrey who made suggestions on how to properly build Firefox, as well as where to begin modifying the source code. For anyone interested in looking to contribute to the Mozilla project I would highly suggest going through the same process of building/modifying from source. It's a great way to gain valuable experience in browser development and it can be very enjoyable. Besides, if you use a browser everyday as a programmer why not customize it to your preferences!

by Eric S (noreply@blogger.com) at October 03, 2017 05:10 PM


Steven De Filippis

Researching Bugs in Mozilla Products

Looking for bugs that you can contribute to is not as easy as one would think. While there is an issues page, the number of available bugs is limited, and they are often already being worked on and resolved by others. It is worth pointing out that the bugs posted here are *known* bugs. There are definitely bugs that are either unfound at the moment or undocumented (for whatever reason).

In regards to the bugs I’ve selected on the issues page for PDF.js, I came across this one: Error using URL object in safari

It seems specific to Safari (and possibly IE9). The browser relocates to a regular expression URL instead of parsing the parameter the way the user intended.

I chose this bug as cross-platform consistency is something I always strive to keep in regards to stability. When bugs are platform-specific, it just affects a percentage of users and usually requires the developer utilizing said library/framework to find a workaround for those users.

I hope to learn about the various differences Safari and Chrome share in this respect.

I have already found the issue with the browser location being redirected to a regular expression. It seems to be related to code here.

I imagine it will take only a few hours to narrow down the underlying cause of this bug. In the meantime, I've been reading online to see if there are any other potential bugs with related issues under Safari and regular expressions.


For my second/fallback bug, I have chosen: add paper size to document information #6990

Like the bug above, this one is in pdf.js. It seems third-party PDF viewers (e.g. Adobe Reader) can parse a PDF and obtain the relevant page size information. pdf.js currently lacks this functionality, and it may be trivial to implement.

For this particular bug, I would need to ensure that I read-up on how PDF documents are formatted. A lot of relevant information in regards to this can be found here. I will certainly be reading up on this, if I choose to proceed with this bug.

By doing so, I hope to learn a lot about the PDF architecture and the underlying functionality that it provides to pdf.js.


In terms of reaching out to developers, pdf.js devs can be found via Mozilla's IRC at irc.mozilla.org:6667 in #pdfjs. I have already joined the channel and will likely be using it to clarify things prior to issuing a pull request once I've fixed the issue.

by Steven at October 03, 2017 02:46 PM


Joshua Longhi

Contributing to open source

The bug I chose to fix was on a Rust project called Bindgen. Bindgen is a Rust library that automatically converts C or C++ headers into Rust bindings; essentially, it converts the header files into usable Rust that can be called. The project has had 143 contributors and can be found here: https://github.com/rust-lang-nursery/rust-bindgen.

The bug I chose can be found here https://github.com/rust-lang-nursery/rust-bindgen/issues/1040#issuecomment-332820423. Essentially, when converting unsigned long long integers you get an error, because bindgen treats all integers as signed: it converts numbers that are too large for signed integers into negative values (two's complement) and then tries to assign them to an unsigned variable. I chose this bug because I wanted to learn Rust.
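To see why the values come out negative, here is a small C sketch of the wrap-around (my own illustration, not bindgen's code): a 64-bit value too large for a signed type, when its bit pattern is read as signed, becomes negative on the two's-complement machines bindgen targets.

```c
#include <limits.h>

/* Sketch of the wrap-around behind the bug: reading an unsigned
 * 64-bit value's bit pattern as signed. For ULLONG_MAX this yields
 * -1 on two's-complement hardware (the conversion is
 * implementation-defined in C, but modular on mainstream compilers). */
static long long as_signed(unsigned long long v) {
    return (long long)v;
}
```

This is exactly the mismatch in the issue: the constant is emitted as if signed, wraps negative, and is then assigned back to an unsigned binding.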

To solve this bug I am going to need to familiarize myself with Rust, then learn this entire project, and even then I will need to pinpoint the bug. I think this should take me 2 or 3 weeks. I am nervous that I won't be able to solve the bug and that the problem at hand is not as easy as stated.

The developers for the project chat on an IRC channel as well as in the comments on the bug. I was greeted with a warm welcome and encouraged to ask questions. All in all it was a pleasant experience, and I hope to be able to fix this bug.

 


by jlonghiblog at October 03, 2017 03:02 AM


Sofia Ngo-Trong

Building an Open-Source Software Package on Linux

Today, as part of Lab 4, we are going to build an open-source software package in Linux. I am using an x86_64 system (the Xerxes server at Seneca).

I checked out the Free Software Foundation's GNU Project to find some free software. The list was so long and dizzying, so I read the blurbs on those packages to see which one I would like. Scrolling down the list, I came across the Automake package, which I thought would be useful because it is a tool that automatically generates Makefiles for you. I know this is ironic, because usually developers write Makefiles, which provide a formulaic shortcut to build and compile your source files into an executable binary instead of compiling them manually. Now, this tool would help me to make this Makefile … is that some sort of second-degree laziness? Hmm… Not sure…

Regardless, let’s get started.

First, we have to download the Automake TAR package. The Automake page gives us the link where we can download the TAR file via HTTP or FTP. The URL is http://ftp.gnu.org/gnu/automake/ , and it contains all of the Automake package versions. Scrolling near the bottom of the list, we can find the latest version. The extensions gz and xz refer to compressed files, using two different compression utilities. I will choose the file automake-1.15.1.tar.gz, which uses gzip for compression. It is 2.2 MB long.

After changing into the directory I want, I run this command:

wget http://ftp.gnu.org/gnu/automake/automake-1.15.1.tar.gz

which saves the file into my directory, like so:

tar file

Now, we have to unzip this tar file, using the following command:

tar -zxvf automake-1.15.1.tar.gz

Now I have an automake-1.15.1 directory. Entering it, I get this directory structure:

automake directory.PNG

The lab doesn’t want us to install the software – instead, we are just to build it.

So… how do we build it? Of course, when in doubt about a software package, you can always read the README file. Inside the file, it says to see the “INSTALL” file for information about how to configure and install Automake. Inside that file, there are instructions for “The simplest way to compile this package”. Hey, I like “simplest ways”. So I will follow its instructions.

The instructions are:

  1. Run ./configure , which will configure the package for our system.
  2. Type ‘make’ to compile the package.
  3. We can type ‘make check’ to run any tests that come with the package.
  4. And we can type ‘make install’ to install the program… but we won’t be going that far.

Okay, so we’re ready to start.

I type:

./configure

It runs a script, and creates some new files. Now, the directory looks like this:

automake directory2

It has created its own Makefile, as well as a config.log and config.status, amongst other changes.

Now, let’s compile the package. I type:

make

Here is the output:

make.PNG

And now let’s run some tests:

make check

A very long script gets executed. It seems to be testing a bunch of files in the t folder of the package. Most things get “PASS” and a number of things get “XFAIL”. Hmm…

The test takes a very long time to run. After waiting more than 10 minutes, I decide to just cut short the test. Here is a snippet of what the testing looks like:

testing

Instead, let’s just test the tool by using it ourselves.

[ several minutes later …]

After reading through the Automake documentation, I realized that the Automake tool is not so easy to use on its own; it is meant to be used in combination with the Autoconf tool, and several configuration files have to be created first, from which Automake is able to automatically create a Makefile.
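As a rough sketch of what those configuration files look like (a hypothetical minimal project with a single hello.c, based on the usual Autotools layout rather than anything from this package), you would write two small files:

```
# configure.ac
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

Running autoreconf --install (which invokes aclocal, autoconf, and automake) would then generate the configure script and Makefile.in from these two files.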

We just need a quick test to make sure that the tool is working.

In the README file, it is mentioned that the package has a test suite, and that we can go to t/README for further information. So I go to the README file in the t folder. There, it says that we can run a manual test by typing the following:

./runtest t/add-missing.tap

So I type that… and off the script runs… At the end, it says:

test pass.PNG

I guess it passed, and the tool is working correctly!

 

So, it wasn’t too bad to build a software package. The configuration script and the Makefile make it very easy to build and compile a software package. For the next part of the lab, we will build a much more complex software package – the glibc package, which provides the core libraries for the GNU system, the GNU/Linux system, and many other systems that use the Linux kernel. A very important package, if you ask me! We will do that in the next post.


by sofiangotrong at October 03, 2017 12:57 AM

October 02, 2017


Matthew Marangoni

Automake and glibc - Code Building

Building an Open Source Software Package


Our first task for this exercise is to build (not install) an open source software package. A list of packages can be found at https://www.gnu.org/software/software.html, and the package I've chosen for this exercise is automake.

Here's a few steps on how we'll go about building the automake software package:

1. Download the automake package to our local directory:

wget http://ftp.gnu.org/gnu/automake/automake-1.15.1.tar.gz


2. Unzip/Extract:

tar -zxvf automake-1.15.1.tar.gz

When we look at the extracted folder's contents, we can see there is an INSTALL text file, which gives us all the instructions we need to build and compile, or install (which we will not do), the automake software package. These instructions tell us to 'cd' to the directory containing the package's source code and run './configure'.

3. Configure:

From the INSTALL document instructions, it tells us the following about the configure script:
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation.  It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions.
Perfect, so it will generate a customized Makefile for our system! Let's run the command.

./configure

Here you can see a small snippet of the output from running this command:


4. Build/Compile:

With our new makefile generated, we can run the simple command:

make

...which will compile the package. If we wanted to do more - like compile and install - we could instead run 'make install'.


A number of new files have been generated as a result of the make command, specifically we are interested in the runtest script, which we will use in our next step to test that automake was properly compiled and is working.


5. Test that automake is working

Before we can make use of our runtest script, we need to provide it with some scripts to test with. Luckily for us, the automake software package includes a script which will generate test files for us. Let's run that script now:

./gen-testsuite-part

This adds various test files to the included t folder. We can now proceed with running the test.

./runtest t/gnumake.sh


Above you can see a small snippet of a test I ran with gnumake.sh. You can continue running tests with as many other files you want until you are satisfied that the software package was built and compiled properly.


Building and Testing the glibc Package


We'll follow a slightly different, but similar build & compile process as we did with automake.

1. Download the glibc package to our local directory:

wget http://ftp.gnu.org/gnu/glibc/glibc-2.26.tar.gz


2. Unzip/Extract:

tar -zxvf glibc-2.26.tar.gz

3. Follow INSTALL instructions to prepare for configure

When we view the INSTALL file's instructions...

cat INSTALL | less

...we learn some important information.


The main takeaways here are that we MUST build and compile glibc in a separate build directory and that glibc CANNOT be compiled in the source directory. By doing this, we can remove the whole build directory in case any errors occur and get a clean start. Let's do this now. Make a new directory in the same location where we extracted glibc:

mkdir glibc-build

Change into our newly created directory and get its absolute path (we will need this in a future command):

cd glibc-build

pwd

4. Configure:

From our build directory, we can now run the configure script like we did earlier, with the extra prefix argument. This tells us where we want the GNU C Library to be installed (you must provide an absolute path for this to work, which is why we ran pwd earlier).

../glibc-2.26/configure --prefix=/home/mmarangoni/spo600/lab4/glibc-build

Here's some of the output from our configure command. You can see below that there's a warning telling us we're missing makeinfo, resulting in some features or tests being disabled. That's no good, so let's make sure we get that package; then we can run the configure script again.


The package we need that contains makeinfo is actually named texinfo, so let's install that now.





With our texinfo package installed, we can re-run the configure command from earlier, which will result in a successful configure with no errors or warnings - great. This will have generated a Makefile for us.

5. Build/Compile:

With our newly created Makefile, we can run our nifty command which will compile our package:

make

This step takes a bit of time, but will compile our local version of glibc in our glibc-build directory.

6. Test:

Before we can jump into testing, the INSTALL file tells us we'll need a few other packages to perform some of the tests. These include Python 2.7.6/3.4.3 or later, PExpect 4.0 or later, and GDB 7.8 or later with support for Python 2.7.6/3.4.3 or later. My machine already had a sufficient version of Python installed, but I still needed PExpect and GDB, so let's install those now.

sudo pip install pexpect
sudo dnf install gdb
sudo dnf debuginfo-install python

The debuginfo package includes debugging symbols and adds Python-specific commands into gdb, which we will need for our tests later. Make sure your gdb installation is configured with Python support (simply having both Python and gdb installed does not guarantee this). We can verify this by running 'gdb', followed by 'show configuration'. You should see something similar to the image below, which shows gdb configured with Python support.



Now we are ready to begin our tests. The INSTALL file tells us we can run the 'make check' command which will build and run test programs which exercise some of the library facilities. Let's do that now:

make check

This command will likely take a while, but at the end will provide you with a result of how many tests succeeded, failed, and were unsupported. Below you will find my results, which for the most part succeeded.


Let's try testing our local version of glibc with our own test file. A simple hello world program should do.


Working as intended, great!

Overriding mechanisms and multiarch

Override

To understand how we can override default functions, we need to understand how the dynamic linker/loader works. For an in-depth description, run this command in your terminal:

man ld.so

Alternatively, you can read about it here.

In brief, the programs ld.so and ld-linux.so find and load the shared objects (libraries) needed by a program, prepare the program to run, and then run it. Linux binaries require dynamic linking unless compiled with the -static option. ld.so handles a.out binaries, and ld-linux.so handles ELF.

When resolving shared object dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash. If a slash is found, then the dependency string is interpreted as a (relative or absolute) pathname, and the shared object is loaded using that pathname. If a shared object dependency does not contain a slash, then it is searched for in the following order:
  1. Using the directories specified in the DT_RPATH dynamic section attribute of the binary (deprecated)
  2. Using the environment variable LD_LIBRARY_PATH
  3. Using the directories specified in the DT_RUNPATH dynamic section attribute of the binary
  4. From the cache file /etc/ld.so.cache
  5. In the default path /lib or /lib64, and then /usr/lib or /usr/lib64
We can also make use of /etc/ld.so.preload, which is a file containing a whitespace-separated list of ELF shared objects to be loaded before the program (/etc/ld.so.preload has a system-wide effect, and is generally avoided).

Finally we have the environment variable, LD_PRELOAD which contains a list of user-specified ELF shared objects that get loaded before all others. The list is space or colon separated, and can be used to selectively override functions in other shared objects.
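As a toy sketch of the interposition idea (my own example, not from the lab): a definition in the main object shadows libc's definition for the same symbol-resolution reason that an LD_PRELOAD-ed object's definitions win, since the main executable is searched before the shared libraries.

```c
#include <stdio.h>

/* Toy illustration of symbol interposition: because the dynamic
 * linker resolves symbols in the main executable before libc,
 * this puts() replaces the library's. An LD_PRELOAD-ed shared
 * object wins over libc for the same reason, without recompiling
 * the program. */
static int intercepted;

int puts(const char *s) {
    intercepted++;
    return fprintf(stdout, "[intercepted] %s\n", s);
}

static int times_intercepted(void) {
    return intercepted;
}
```

A real LD_PRELOAD override would instead put puts() in a shared object (gcc -shared -fPIC) and run the unmodified program with LD_PRELOAD=./override.so.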


Multiarch

Multiarch is the capability of a system to install and run applications of multiple different binary targets on the same system, e.g. running an i386-linux-gnu application on an amd64-linux-gnu system. Multiarch simplifies cross-building, where foreign-architecture libraries and headers are needed on a system during building.

Currently systems allow for the co-installation of libraries and headers for different architectures, but not binaries. Multiarch integrates support for cross-architecture installation of binary packages, immediately improving on this.

An immediate benefit of multiarch is that it makes a wider array of 32-bit applications available to 64-bit users, which are now becoming more commonplace. More in-depth information regarding multiarch can be found at https://wiki.debian.org/Multiarch/ and https://wiki.ubuntu.com/MultiarchSpec.

If you are interested on how packages can be converted for multiarch, that information can be found here.


Multiarch and gcc:

When using the gcc compiler, you can specify whether to enable or disable multiarch support by adding the option --enable-multiarch. The default is to check for glibc start files in a multiarch location, and enable it if the files are found. The auto detection is enabled for native builds, and for cross builds configured with --with-sysroot, and without --with-native-system-header-dir.

by Matthew Marangoni (noreply@blogger.com) at October 02, 2017 05:16 PM


Pablo Calderon

Lab 2 – SPO600 Compiled C Lab

After creating a simple c program:

#include <stdio.h>
int main() {
    printf("Hello World!\n");
}

We have to convert this C program into assembly code to further dissect it. Through assembly code we can better understand how the program functions, allowing us to optimize it.
Once we compile the code with this command:

gcc -g -O0 -fno-builtin helloworld.c

And then use the output file to enter the assembly code through this command:

objdum –source a.out

The following is what we got. This is the main function of the code in assembly language. Within main you can see values being pushed and moved. For "mov" you can see that it takes two locations: the former being the source, and the latter being the destination, so data moves from one register to another. Eventually it gets to callq, which calls the printf routine to print the message.

spo600_lab2_1

 

1. In the next steps we add the static compiler option:

gcc -g -O0 -fno-builtin -static helloworld.c

Just from looking at the picture below, you can see that the a.out file (in the red box which contains the static option) is much bigger than the previous iteration of the gcc command (outlined in the blue box).

spo600_lab2_2.png

Also with the -static option, there are a lot more section headings. Most of them have to do with memory and how much of it is free, as well as slot info. The -static option takes the dynamically linked libraries and inserts them directly into the object file. Also, in main we can now see that it calls _IO_printf, as opposed to the traditional printf.

 

2. In this step we take out the -fno-builtin option. As you can see below, it changes printf to puts.

spo600_lab2_3.png

 

3. Next, we removed the -g compiler option, which took out the debugging information. This made the file smaller, as one can see in the picture below.

spo600_lab2_4

 

4. In this portion of the lab we add more arguments to the printf function in the code to see how that affects the assembly code. As you can see below, more registers are used to contain the arguments. To me it seems as though some arguments share a register and others have their own register.

spo600_lab2_5.png

 

5. In this step we add another function to the code to see how it differs from the original helloworld code. As you can see below, the function output has its own section and it calls printf to print "Hello World!". In the main section, main calls output as opposed to printf.

spo600_lab2_6.png

 

6. Finally, with the -O3 flag, the code is compiled at a higher level of optimization. It completes the same task as the original helloworld code, but with fewer lines of assembly code (as you can see below).

spo600_lab2_6

 

To conclude this lab (2): this is my first exposure to assembly-level code, and it seems hard to read compared to C. But I guess as the course progresses I'll get better at reading and writing it. It just takes practice.


by pabinski at October 02, 2017 07:28 AM


Aaron Brooks Patola

Build and Testing glibc

For our next build process, we are going to test out the GNU C Library (glibc), which provides the core libraries for the GNU system and many others that use Linux as the kernel.

First we must find and install the source code on our system. A quick glance at their website informs me the latest released version is version 2.26 (2017-08-02), so this is what we will download now…

As usual, with our wget and tar commands…

wget:tar.png

Extraction complete! As with our last open source build (gnugo), this one also has an INSTALL file, so let's take a look at it and see what we have to do to build this beast…

INSTALL.png

A bit more confusing than building the gnugo source, as it talks about configuring into different directory paths…

First we must make a new directory entitled "glibc-build" at the same directory level as where we downloaded the source files. Once that is complete, from our new build directory we can issue a configure command using the mandatory --prefix option with our current build directory appended. Finally, once this is done, we can issue a make command to build the source. Let's give it a try now…

configure.png

Seems everything has configured properly; on to the make!…

make.png

After a long 14 minute process it finally finished….

After another lengthy stretch of time wondering why I couldn't locate the .c files and could only see .o files, I realized I had to navigate back into the downloaded source directory.

We are now asked to test the library that we have built by introducing a bug in the behaviour of a function. After browsing many functions, it seems the simplest one to test is rand(), found in rand.c in the stdlib folder. So let's make a simple program that uses the function…

random program.png
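A minimal sketch of such a program (my own reconstruction; the function names are mine, not from the screenshot) could look like:

```c
#include <stdio.h>
#include <stdlib.h>

/* Fill buf with n samples from the library's rand() so they can be
 * printed or inspected. With the bug introduced in rand.c below,
 * every sample would come back as 1. */
static void sample_rand(int *buf, int n) {
    for (int i = 0; i < n; i++)
        buf[i] = rand();
}

/* Print each sample on its own line, as in the screenshots. */
static void print_samples(const int *buf, int n) {
    for (int i = 0; i < n; i++)
        printf("%d\n", buf[i]);
}
```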

This will generate ten random numbers as follows…

expected rand output

Notice the use of the testrun.sh executable provided in our glibc-build directory, which makes our program use our custom-built library functions. Let's now introduce a small bug in the rand file…

rand bug.png

Now we will have to re-configure and make our library…

The build took much less time this go around thankfully (sub 60 seconds).

Lets see what happens when we run our random program again…

rand bug output.png

Our bug has worked! It only prints the number “1”!

For now, it seems there are many functions in this library that may be fun to play with and implement even more bugs in, or possibly even find some bugs to fix!

 


by bpatola at October 02, 2017 02:58 AM


Ronen Agarunov

SPO600 - Lab 4 - Code Building Lab

In lab number 4, we will be building a software package.
In the first step, we will choose a software package from the Free Software Foundation's GNU Project.
A full list of software packages can be found here: Link.

I have decided to build the software package Barcode. Barcode is "a tool to convert text strings to printed bars. It supports a variety of standard codes to represent the textual strings and creates postscript output."

Now after picking the package we would like to work with, we will log into our Linux system and download the package using "wget" command:
 wget ftp://ftp.gnu.org/gnu/barcode/barcode-0.99.tar.gz
This is what we should be getting from the console:
barcode-0.99.tar.gz 100%[===================>] 869.85K  2.87MB/s    in 0.3s

2017-10-01 19:47:58 (2.87 MB/s) - ‘barcode-0.99.tar.gz’ saved [890730]
 Next, we will unzip the file we have downloaded using the "tar" command:
 tar -zxvf barcode-0.99.tar.gz
After unzipping the file, we can see there should be an instruction file (often named INSTALL). In this case, the INSTALL file tells us to look at INSTALL.generic for basic instructions. By reading INSTALL.generic we can see the following instructions:

From the document, we understand that the next step would be to run the "configure" command.
After the configuration is done, we will run the command "make" to compile the package.
The fourth step "make install" would install the programs and relevant data, which is something we do not want, so we won't do this part.
After running the command "make", which should finish in a few minutes, we will get a new file called "barcode".
By running it we can test the software package:


It works!


Part 2: Build and test glibc

In this part we will find and build the source code for the latest released version of the GNU Standard C Library (glibc), which can be found at the glibc website.
Now we will download it to our system using the "wget" command:
wget http://ftp.gnu.org/gnu/glibc/glibc-2.26.tar.gz
 This is what we are supposed to be getting:
glibc-2.26.tar.gz   100%[===================>]  28.00M  19.6MB/s    in 1.4s

2017-10-01 20:00:41 (19.6 MB/s) - ‘glibc-2.26.tar.gz’ saved [29355499/29355499] 
Next, we will unpack the file using "tar -zxvf".
Same as with most other software packages, the installation instructions are in the INSTALL file, which we will now open and skim through.
The INSTALL file states that:
The GNU C Library cannot be compiled in the source directory.  You must
build it in a separate build directory.  For example, if you have
unpacked the GNU C Library sources in '/src/gnu/glibc-VERSION', create a
directory '/src/gnu/glibc-build' to put the object files in.  This
allows removing the whole build directory in case an error occurs, which
is the safest way to get a fresh start and should always be done.
As a safety measure, we will create a new folder called "glibc-build" and compile the files there, using a prefix in our command:
../glibc-2.26/configure --prefix=/home/ragarunov/glibc-build
Then we will run the command "make".
After a long compiling process, we can finally begin to test our library!

Testing:
The library provides us the file "testrun.sh" that can be used to test our own version. Using that, we can test our version of glibc by creating a simple Hello World program in C:

It works!
Now, we will try to introduce a bug and run the program. We will do so with a simple array and a loop:
[ragarunov@xerxes glibc-build]$ cat test.c
#include <stdio.h>

int main () {
        int num[4] = {1, 2, 3, 4};
        int i;

        for (i = 0; i<5; i++) {
                printf("%d", num[i]);
                printf("\n");
        }

        return 0;
}
After compiling and running the command:
./testrun.sh /home/ragarunov/lab4/glibc-build/test
Prints:
1
2
3
4
1219370488
In both tests, the library compiled and ran the files as expected! (The garbage fifth value appears because the loop reads num[4], one element past the end of the array, which is undefined behaviour in C.)

Override:
The override mechanism is commonly used in object-oriented programming languages as a feature that lets subclasses provide implementations that replace (or override) the implementation already given by the parent class. In the context of glibc, functions can similarly be overridden at load time via the dynamic linker (e.g. with LD_PRELOAD).

Multiarch:
Multiarch is a term that refers to the capability of a system to install and run applications of multiple different binary targets on the same system. It is used to simplify cross-building of the libraries and headers that are required on a system during building.

by ron-spo (noreply@blogger.com) at October 02, 2017 12:49 AM

October 01, 2017


Azusa Shimazaki

Bugs on open source

Open source is huge; it was beyond my understanding.
Even after learning some information about it, open source still seemed mysterious to me.


For a class project, I have been asked to fix a bug in an open source project.
First, I tried to find bugs, but I could not understand the bugs in the reports, or the understandable ones were already taken. I was not sure which one was good, or which one I should choose.


Overwhelming. I got totally lost, so I asked my professor for help.
After a conversation, he figured out a direction for me, then introduced some bugs from the website called "Thimble".
thimble
https://thimble.mozilla.org/en-US/


Thimble is an online code editor that has a real-time preview function.
The website is very visual.
Since I have a text allergy, I felt it would suit me to work on bugs for this site.


From my professor's suggestions, I chose the following two issues, which interest me.


1.

Add support for auto downsizing non-animated GIFs with ImageResizer #2307

https://github.com/mozilla/thimble.mozilla.org/issues/2307

The issue is, when you drag a picture file onto the Thimble dashboard,
the picture is resized to be smaller, but if it is an animated .gif file that contains multiple frames,
it is not resized.



2.

Feature Request > Indicate font size next to font size UI#2118

https://github.com/mozilla/thimble.mozilla.org/issues/2118

The text size of the Thimble editor is controlled by + and - buttons.
However, the request is to display which font size is currently in use.




The reason I chose them is that the goals are very clear and it is easy to see the results.
I hope to learn the bug-fixing process from start to end by working on these bugs, and I would like to be able to swim freely in the open-source world.
I am a bit nervous about whether I can fix them and about posting comments in the community. To be safe, I will need to review JavaScript.


Apparently, the developers on the project are on Mozilla chat and GitHub.

Mozilla Chat
 https://chat.mozillafoundation.org/mozilla/channels/thimble

thimble Github page
https://github.com/mozilla/thimble.mozilla.org


I guess it will take 2 to 3 weeks to solve a bug.
I hope I can do it faster!

    by Az Smith (noreply@blogger.com) at October 01, 2017 08:50 PM


    Aaron Brooks Patola

    Building an Open Source Software Package

    Today we’re going to be building an open source software package from the Free Software Foundation’s GNU Project. How exciting!! Now let’s pick a package to build…

    Initially I’m very surprised by the vast number of packages that are offered to the developer, a total of 395 by my estimation. One package in particular caught my eye as I was navigating through the list. I have been a lifelong fan and player of the game of Chess, and never really looked into the game of Go, which seemed to have some similarities, yet there the package was … labelled as “gnugo”. The game caught mainstream attention this year when Google’s AlphaGo AI won three matches against the world’s best Go player.

    With the selection in place, let’s download the source code for the software (we will not be installing the software on the system)…

    We will use wget to download the file and then use tar to decompress and extract the archive…

    wget.png

    Seems like it worked. Now let’s look into the INSTALL file and see what the recommended path is for building this package…

    install.png

    We will first need to use the “configure” command to build this package…

    configure

    Now for the “make”…

    make

    For some deeper learning, you can read more about these commands and what they do here.

    Since everything seemed to work, let’s try to run a “make check” command…

    make check.png

    After approximately two minutes of waiting for the command to finish, it’s interesting to note that it goes into all the directories and reports back that there “is nothing to be done”. Let’s continue…

    We have now downloaded and built the package and it is time to do a test run to make sure we can actually use this program…

    For gnugo the developers recommend running it with a graphical user interface called CGoban. However, it is possible to run the game using the ASCII interface, and that is the route we will take…

    After looking into the directory structure I found the executable in the interface folder…

    gnugo

    Sweet!! We have the game up and running on the system! Let’s see what happens if we attempt to make a move…

    moves.png

    Now that is cool! I also am fond of the “GNU Go is thinking. . .” string to make it seem more AI based. After it has selected a move, we are now asked to make one in return. Since I do not know Go strategy at all, I will stop here and read some documentation on strategies for gameplay to have an epic encounter in the future with this program!

    As a final aside, it was also very interesting to be able to go inside the game’s engine directory and look at all the C files…

    engine.png

    an example from “dragon.c”…

    dragon.png

    Pretty sweet! This has propelled me to become very excited for learning more about building open source software packages and hopefully contributing to some in the near future.


    by bpatola at October 01, 2017 06:48 PM


    Jiel Selmani

    One Text Editor To Rule Them All

    Photo by Redd Angelo on Unsplash

    Alright, I'm sure you're thinking "the title can't be correct," and you know what?  You're right.  I don't think that there is any text editor that is the master of them all.  Competition is healthy and in open-source, does that word really even exist?

    This week, I'm taking a look at different code editors to see what works best for my workflow.  The editors in question are Atom and Visual Studio Code.  Being someone who is interested more in function than form, I typically enjoy working on the server-side, using IDEs like Visual Studio 2013-2017 when working with languages like C++ (my favourite).  This past summer, I finally got my hands dirty working solely with Atom and, funny enough, I wrote code for the server-side within it using Node.js and its robust frameworks.  However, none of that is important right now.  What is important is: which is better for me?

    Atom

    Atom is an excellent text-editor (built by GitHub) that I really enjoy using. With integrated Git support right out of the box it just speeds up the workflow the minute you start using it.  Adding and/or removing projects is as simple as Ctrl+Shift+A and then choosing from the explorer in Windows. Navigating through the project structure is simple too, with a Tree View pane on the left and Ctrl+Shift+F, you can easily find what you're looking for in a matter of seconds.  The GIF below will demonstrate as I search for the 'process' variable across the project.



    Visual Studio Code

    Visual Studio Code utilizes many of the same key bindings.  The reason is likely that both editors are built on top of the Electron platform, which was itself developed by GitHub.  Since I was already familiar with Atom, I decided that I would focus my time working with Visual Studio Code to get a feel for what else the world has to offer.

    To get started, I downloaded the Best Resume Ever source code and added the project folder by using a different key binding (Ctrl+K Ctrl+O) that I later changed to match Atom's (Ctrl+Shift+A). Here's an image showing you what it looks like.


    Visual Studio Code
    Super awesome.  We have a tree view on the left (and it's colourful) and we have access to the Terminal, Debugger, Warnings/Errors, and Output as seen on the bottom portion of the screen.  Just like Atom, Git Integration is built-in and repositories can be utilized for version control with ease.  Alright, let's start configuring this.  I'll start with tab indentation.

    I personally don't like using the really large tabs.  I know it makes it easy to read code, but when you start writing and you have a horizontal wave starting, it becomes a nuisance to read.  Right out of the box, VS Code attempts to guess your indentation style based on the file you've opened.  We can change that to match our own personal style, or the style of the organization/company we work for so everything is consistent.  Not a difficult task as you can see in the GIF below.


    By using the keybinding Ctrl+, we are able to see all of the default settings that the text editor offers.  For the sake of speed, I had already cut the values I wanted to change and pasted them as new User Settings.  If you're curious which ones I used to set the tab indentation to 2 spaces rather than 4, search for "editor.tabSize".  To make sure your settings don't get overwritten on every project, set the "editor.detectIndentation" value to false.  You're all set.
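    For reference, the two values mentioned above would look like this as User Settings (a minimal settings.json fragment; VS Code's settings file allows comments):

```json
{
  // Use 2-space indentation everywhere...
  "editor.tabSize": 2,
  // ...and stop VS Code from guessing the indentation per file.
  "editor.detectIndentation": false
}
```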

    Extensions

    One of my biggest pet peeves is when I'm working and get on a roll, only to find out that I'm having trouble pulling changes into a repository.  I'm sure many developers have run into this, and it certainly sucks.  Luckily, with some searching and a little inspiration from a classmate's post, I found Git Merger.  

    GitMerger
    Git Merger is a Merge-Resolving extension that makes merging project files easier and less of a headache.  Using it, we can merge a branch into our working branch, abort merges, stash our work and then unstash when we're ready.  Thankfully, we can use this directly from the Command Palette (Ctrl+Shift+P) and enter simple commands that help us make our stressful developer lives easier.  

    Speaking of easier, JavaScript is becoming more powerful and is making incredible moves into nearly every platform.  Mobile, desktop, servers...you name it, JavaScript is there.  So why not focus on linting my JavaScript to make sure my code meets industry standards?  JSHint is a VS Code extension that makes sure that your JavaScript files are clean.

    JSHint
    Without opening my own project files, JSHint immediately detects that there are some errors in the Best Resume Ever project.  In the node directory, in the app.js file, JSHint reports 22 warnings.  It's not necessarily perfect, because some ES8 (ES2017) syntax like async/await functions hasn't been accounted for, but it works, which is great.  You can see it in action below.

    JSHintWorking

    Looking To The Future

    Moving forward, I think I'm going to continue to invest my time in VS Code.  I don't have anything against Atom (still love using it) but the VS Code interface becomes much more intuitive the more I use it.  With integrated debugging features, a lot of packages, and access to the Terminal in the application I think I'm going to keep finding new ways to improve my workflow.  Next step, integrate the C++ compiler so I can keep working with my favourite language...in any OS environment.

    by Jiel Selmani (noreply@blogger.com) at October 01, 2017 06:24 PM


    Eric Ferguson

    Building Open Source Software and the GNU Standard C Library (Lab 4 Part 1)

    An amazing development in the GNU Project is cflow, a control-flow charting tool for C programs. I built the software on Matrix (a Linux server) under normal privileges, and the steps were very simple; all are documented below:

    1) transfer the package to the server via sftp after downloading the most recent tar.gz file.

    2) Extract the package using tar -xvzf (package name) in its current directory

    3) cd into the new directory

    4) Run ./configure [here the Makefile is generated based on detected settings]

    5) Run make [the dependencies for cflow are being built]

    6) cd src

    7) Run the command using cflow (filename), I tested this with cflow's main.cpp. The results are here.

    I found the process very simple and interesting; it did take some exploring to find where to run the command, however.

    In the next part I will be building and testing the latest version of the GNU Standard C Library.

    by Eric Ferguson (noreply@blogger.com) at October 01, 2017 03:57 AM