Planet CDOT

March 25, 2017


Derrick Leung

Compiling/building glibc

Attempt to build glibc on matrix/zenit servers.

Following gnu.org’s instructions on building/compiling the latest version 2.25 (https://sourceware.org/glibc/wiki/Testing/Builds):

(Building without installing)

$ mkdir $HOME/src
$ cd $HOME/src
$ git clone git://sourceware.org/git/glibc.git
$ mkdir -p $HOME/build/glibc
$ cd $HOME/build/glibc
$ $HOME/src/glibc/configure --prefix=/usr
$ make

 

Ran into issues with disk quota on both zenit and matrix during the git clone process. Even after deleting unnecessary files, it seems that the quota on school-provided accounts is quite limited (for obvious reasons).

 


by derrickwhleung at March 25, 2017 01:10 AM

March 24, 2017


Dang Khue Tran

OSD600: LAB 8: OPEN SOURCE TOOLING AND AUTOMATION PART 2 – UNIT TESTING

In today’s OSD600 lab, I continued with the code from the previous lab on Unit Testing with the two functions in this repo.

The point of Unit Testing?

Unit Testing helps us make sure that our code does what we expect, and that we don’t break it as we change it.

Automated unit tests can also act as documentation for anyone who would like to contribute to your code, and ensure they don’t break it.

Writing Unit Tests

I chose Jest, by Facebook, as the framework I am going to use for Unit Testing.

The first thing I did was to install Jest into my repo using the Node Package Manager:

$ npm install --save-dev jest

To automate our tests, we can set up scripts in the package.json file to run Jest like this:

"scripts": {
    "lint": "node_modules/.bin/eslint *.js",
    "jest": "node_modules/.bin/jest",
    "test": "npm run -s lint && npm run jest"
},

“lint” and “jest” are basically shortcuts to the eslint and jest executables that run them on our code.

“test” is the command we run in order to execute both “lint” and “jest”.

A simple test will look something like this:

// First require (e.g., load) our seneca.js module
var seneca = require('./seneca');

/**
 * Tests for seneca.isValidEmail()
 */
describe('seneca.isValidEmail()', function() {

  test('returns true for simple myseneca address', function() {
    var simpleEmail = 'someone@myseneca.ca';
    expect(seneca.isValidEmail(simpleEmail)).toBe(true);
  });

  test('returns false for a non-myseneca address', function() {
    var gmailAddress = 'someone@gmail.com';
    expect(seneca.isValidEmail(gmailAddress)).toBe(false);
  });

});

Firstly, we load the functions we want to test into the variable seneca.

The describe function groups a set of related tests together for better organization.

The test function is where we define a single test. expect receives a value (here, the result of a function call), and toBe asserts that this value equals the one passed to it.

Conclusion

I have had experience with Unit Testing in Java before, where it is quite useful, and it is the same here with JavaScript. Sometimes it is also fun to do.

Unit Testing was simple to learn in this lab, but I also learned something else as I worked with the code more than in the last lab: I stumbled on a few eslint rules defined in the AirBnB convention, so I had to look up the eslint documentation in order to modify the rules and pass the linting process.
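A rule can be relaxed or disabled in the rules section of .eslintrc.json; for example (the specific rules below are illustrative, not necessarily the ones I changed):

```json
{
  "extends": "airbnb-base",
  "rules": {
    "no-plusplus": "off",
    "prefer-template": "warn"
  }
}
```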


by trandangkhue27 at March 24, 2017 07:06 PM


Christopher Singh

Lab 7 – Open Source Tooling – OSD600

In this lab I learned how to set up tooling for a repository. I learned how to set up a new Node.js module and customise it. I did a little JavaScript coding for my module, which validates a Seneca email or, very primitively, creates one. I also learned how to set up linting for my project, and made sure it ran automatically when executing npm test. Finally, I set up Travis CI on the repository so that every time I pushed a commit, it would run these tests. I added the badge to my README file.

I ran into a number of problems while doing this lab. First of all, being on Windows, I couldn’t use Git Bash for the first parts. Configuring eslint was also a mess. For some reason, in Git Bash, I could not use the arrow keys to select the options I wanted. Instead, it was trial and error, using the number keys (an alternative I found through research) to guess which option I had chosen. Another long, arduous process was attempting to satisfy the linting check. It’s incredibly strict and took me many commits to get it right. It ensures consistency at the cost of convenience. One thing that really irritated me, though, was that I couldn’t use the common way of concatenating strings (+). Instead I had to use a built-in function. It also complained that I couldn’t use ‘var’ and should use ‘let’ instead. A final complaint I have is how long it takes a new Travis CI build to start after pushing a new change. However, I shouldn’t complain because it is free, after all.


by cgsingh at March 24, 2017 02:26 PM


Timothy Moy

OSD600 Lab7: Linting & Travis-CI

For the most recent lab, we were tasked with getting comfortable with linting our work and with basic usage of the testing platform Travis-CI. I will only provide an overview of the work here; for those who want an in-depth guide, the step-by-step instructions for the lab can be found here.

Setting Up a Playground

First off, we need to create a repository to test everything in. It’s a simple process that can be done on GitHub using the graphical interface.

Settings required for the new repository:

  • initialize with a README.md
  • add a .gitignore for Node
  • use the MIT license

After creating it, clone it to your local machine using:

git clone git@github.com:[your-github-username-here]/[your-repo-name-here].git

Next we need to initialize node.js for the project.

Note: due to problems with the arrow keys, Windows users must use the normal command prompt for this command.

npm init

Now that that’s out of the way, create an index.js file (or whatever entry-point name you chose during npm init) and write some basic code so we will have something to test with.

Linting and Automating It

Linting

Now we want to ensure the code we write is up to standard. The linting program we used was ESLint, which we installed for the project through npm using the command:

npm install eslint --save-dev

Note that this installs it for the current project only!

To configure it, use the command:

./node_modules/.bin/eslint --init

Be aware that Windows requires you to use the normal command prompt instead of Git Bash (the arrow-key problem again), and that you must use backslashes “\” instead of forward slashes (since we are now going through directories). For other operating systems, no changes should be needed.

Setting the linter to use the popular style guide from Airbnb without React, and having the config file in JSON, should be adequate for our purposes.

./node_modules/.bin/eslint [filename].js

The above command will lint whatever file we give it. *.js can also be used to lint all JavaScript files in the specified directory.

Automating

To automate the lint via a script command, we need to modify the scripts section of our package.json using:

"scripts": {
 "lint": "node_modules/.bin/eslint *.js",
 "test": "npm run -s lint"
}

After that has been done, you can run “npm test” (silent, thanks to the -s flag) or “npm run lint” to lint your JavaScript files.

Alternatively, if you are using atom you can install the package “linter-eslint”. This will have several other dependencies you need to install, but once it has been installed, it will lint as you type!

Travis-CI

Travis is like GitHub: an intuitive interface for something much more complicated. It is often used with GitHub to provide testing for projects, which matters to us because you can log in with your GitHub account (login here)!

Setup:

  1. log in with your GitHub account
  2. go to your Travis-CI profile page and authorize access to your GitHub repositories
  3. turn on access to the specified repository
  4. create a basic .travis.yml file:

     language: node_js
     node_js:
       - "node"

  5. push a new commit with all your files to GitHub
  6. add a badge to your README.md by adapting this (or click the badge in Travis for the copy/paste text):

     [![Build Status](https://travis-ci.org/[user-name]/[repository-name].svg?branch=master)](https://travis-ci.org/[user-name]/[repository-name])

  7. ensure your build is passing!

If you have any errors, check the messages and fix as appropriate. Otherwise, I found the steps to be simple and straightforward. This is definitely a good exercise to get a good development environment set up and we will continue to expand upon it next lab.

 


by Timothy Moy at March 24, 2017 04:55 AM


Wayne Williams

GLIBC: Learning to Build It

One of the first things we need to do in order to get our work on GLIBC incorporated into the mainstream library is to test it. In order to test it, we need to build it and make changes and see the results of our changes.

In this post, I will document my efforts to build glibc on my local machine.
Since changes to glibc happen from time to time, it's probably better to get the source files from Git, and then pull new changes as they come. The website suggests doing:


git clone git://sourceware.org/git/glibc.git
cd glibc
git checkout --track -b local_glibc-2.25 origin/release/2.25/master

I followed the commands precisely, and now I have the cloned repository on my local machine with a release branch. From there I made another branch called 'mktime-optimize' so that I can do my own work without messing up the master branch.

For building instructions, a link is given: https://sourceware.org/glibc/wiki/Testing/Builds

I think I want to build without installing, and hopefully the build will be enough to run tests on. The website gives a basic prototype:


$ mkdir $HOME/src
$ cd $HOME/src
$ git clone git://sourceware.org/git/glibc.git
$ mkdir -p $HOME/build/glibc
$ cd $HOME/build/glibc
$ $HOME/src/glibc/configure --prefix=/usr
$ make

Now.. I will apply the same logic to my branch and see what happens. Since I was in my home directory when I cloned the repository, I'll check for /src inside the glibc/ directory I have. I didn't see any /src, so this is confusing me a bit. I think that /src is a directory name that we create so that we don't get confused about which file group we are in. I THINK, as long as I remember that the /glibc on my root directory is the "source"... I should be fine. I'm going to try: mkdir -p /build/glibc, while in the root directory (not inside /glibc) and see what happens..

So, it didn't work, but using ' mkdir -p $HOME/build/glibc ' worked out and there is a /build directory in my root directory. Next, for step 5, I typed ' $HOME/glibc/configure --prefix=/usr ' (making sure to omit the /src part) and got an error.

It seems I have a problem, since I am trying to build a Linux system library inside of Windows. I'm going to try to get gcc or something close enough to work with:

http://stackoverflow.com/questions/6394755/how-to-install-gcc-on-windows-7-machine
https://sourceforge.net/projects/mingw-w64/files/

I started the download and selected the appropriate options.

Still didn't work.. so in a mixture of frustration and also some curiosity, I did all the build steps inside the Xerxes account (because it's Linux-based, accessed through my Matrix account). And.. after several minutes of text screaming by.. it finished and appears to be built inside XERXES!! Hopefully I didn't (and won't soon) break the server!!

by Siloaman (noreply@blogger.com) at March 24, 2017 04:17 AM


Lawrence Reyes

SPO600 Project – blog #2

To gauge the performance difference between the two architectures, I created a small program and compared its run time on both AARCH64 and X86_64.

Basic testing program:

#include <string.h>
#include <stdio.h>

int main(void){
    char* res;
    for(int i = 0; i < 10000000; i++){
        char* hash = "HelloMyNameIsLawrence";
        char* key = "Is";
        res = strstr(hash, key);
    }
    printf("%s\n", res);
}

Results in X86_64:

/*
The program was run with the stock (unmodified) version of strstr
*/

Command:
gcc -std=c99 tst-strstr.c

Time:
real 0m0.491s
user 0m0.490s
sys 0m0.001s

Disassembly:
0000000000400546 <main>:
400546: 55 push %rbp
400547: 48 89 e5 mov %rsp,%rbp
40054a: 48 83 ec 20 sub $0x20,%rsp
40054e: c7 45 f4 00 00 00 00 movl $0x0,-0xc(%rbp)
400555: eb 2b jmp 400582 <main+0x3c>
400557: 48 c7 45 e8 30 06 40 movq $0x400630,-0x18(%rbp)
40055e: 00
40055f: 48 c7 45 e0 46 06 40 movq $0x400646,-0x20(%rbp)
400566: 00
400567: 48 8b 55 e0 mov -0x20(%rbp),%rdx
40056b: 48 8b 45 e8 mov -0x18(%rbp),%rax
40056f: 48 89 d6 mov %rdx,%rsi
400572: 48 89 c7 mov %rax,%rdi
400575: e8 c6 fe ff ff callq 400440 <strstr@plt>
40057a: 48 89 45 f8 mov %rax,-0x8(%rbp)
40057e: 83 45 f4 01 addl $0x1,-0xc(%rbp)
400582: 81 7d f4 7f 96 98 00 cmpl $0x98967f,-0xc(%rbp)
400589: 7e cc jle 400557 <main+0x11>
40058b: 48 8b 45 f8 mov -0x8(%rbp),%rax
40058f: 48 89 c7 mov %rax,%rdi
400592: e8 99 fe ff ff callq 400430 <puts@plt>
400597: b8 00 00 00 00 mov $0x0,%eax
40059c: c9 leaveq
40059d: c3 retq
40059e: 66 90 xchg %ax,%ax
Command:
gcc -std=c99 -O2 tst-strstr.c

Time:
real 0m0.001s
user 0m0.001s
sys 0m0.000s

Disassembly:
0000000000400400 <main>:
400400: 48 83 ec 08 sub $0x8,%rsp
400404: bf bb 05 40 00 mov $0x4005bb,%edi
400409: e8 e2 ff ff ff callq 4003f0 <puts@plt>
40040e: 31 c0 xor %eax,%eax
400410: 48 83 c4 08 add $0x8,%rsp
400414: c3 retq
400415: 66 2e 0f 1f 84 00 00 nopw %cs:0x0(%rax,%rax,1)
40041c: 00 00 00
40041f: 90 nop

Results in AARCH64:

/*
The program was run with the stock (unmodified) version of strstr
*/

Commands:
gcc -std=c99 tst-strstr.c

Time:
real 0m0.742s
user 0m0.740s
sys 0m0.000s

Disassembly:
0000000000400620 <main>:
400620: a9bd7bfd stp x29, x30, [sp,#-48]!
400624: 910003fd mov x29, sp
400628: b90027bf str wzr, [x29,#36]
40062c: 1400000e b 400664 <main+0x44>
400630: 90000000 adrp x0, 400000 <_init-0x430>
400634: 911ca000 add x0, x0, #0x728
400638: f9000fa0 str x0, [x29,#24]
40063c: 90000000 adrp x0, 400000 <_init-0x430>
400640: 911d0000 add x0, x0, #0x740
400644: f9000ba0 str x0, [x29,#16]
400648: f9400fa0 ldr x0, [x29,#24]
40064c: f9400ba1 ldr x1, [x29,#16]
400650: 97ffff98 bl 4004b0 <strstr@plt>
400654: f90017a0 str x0, [x29,#40]
400658: b94027a0 ldr w0, [x29,#36]
40065c: 11000400 add w0, w0, #0x1
400660: b90027a0 str w0, [x29,#36]
400664: b94027a1 ldr w1, [x29,#36]
400668: 5292cfe0 mov w0, #0x967f // #38527
40066c: 72a01300 movk w0, #0x98, lsl #16
400670: 6b00003f cmp w1, w0
400674: 54fffded b.le 400630 <main+0x10>
400678: f94017a0 ldr x0, [x29,#40]
40067c: 97ffff89 bl 4004a0 <puts@plt>
400680: 52800000 mov w0, #0x0 // #0
400684: a8c37bfd ldp x29, x30, [sp],#48
400688: d65f03c0 ret

Commands:
gcc -std=c99 -O2 tst-strstr.c

Time:
real 0m0.001s
user 0m0.000s
sys 0m0.000s

Disassembly:
0000000000400470 <main>:
400470: a9bf7bfd stp x29, x30, [sp,#-16]!
400474: 910003fd mov x29, sp
400478: 90000000 adrp x0, 400000 <_init-0x3f0>
40047c: 911a6c00 add x0, x0, #0x69b
400480: 97fffff8 bl 400460 <puts@plt>
400484: 52800000 mov w0, #0x0 // #0
400488: a8c17bfd ldp x29, x30, [sp],#16
40048c: d65f03c0 ret

Conclusions:

As you can see, the run time of the program on AARCH64 is longer than on X86_64 when the program is compiled without any optimization, which made me think that there is some room for optimization on the AARCH64 side. However, it worries me that when the compiler performs optimization, the run time on both architectures drops to just 0.001s; the -O2 disassembly above shows why, since the compiler evaluates the loop-invariant strstr call at compile time and removes the loop entirely, leaving only the call to puts. Because of this result, I compared the assembly generated on both architectures, but found no meaningful difference between them.


by lawrencereyesoo at March 24, 2017 02:57 AM

SPO600 Project – blog #1

I delayed the start of this project for health reasons. Now that I am finally able to work on it, I have realized how hard it is and how much time it will actually take to get a decent result that I could even submit to the community.

The hardest part of the process for me was picking the functions that I am supposed to optimize. Since I was not able to work on the project until last week, most of the functions that are reasonable to try to optimize were taken already, so I had to take the strstr() and mcheck() functions. For now I am focusing more on strstr(), since I believe it would be the fastest to optimize.


by lawrencereyesoo at March 24, 2017 02:21 AM

March 23, 2017


Ray Gervais

When Segfaulting Won’t Do

An SPO600 Project Update
Sometimes, you have a great idea which may improve one of the worst processes a developer routinely experiences over and over, and sometimes your idea is so grand that reality escapes your grasp quicker and quicker with each passing second. This is what I had come to realize after discussing with Chris how I could benchmark my updated segfault function, to which his response was simply, “why?”

It seems that, in my excitement to optimize a common issue, I never thought to wonder if it would make a difference. I don’t mean the performance metric; I mean to the developers. Segfault is not an attractive state to have in your code, nor is it a ‘feature’, so why would I improve a system that would not benefit the developer in any way, aside from shaving a few nanoseconds off of their application’s crashing descent into a closed state? Chris raised quite a few points, expanding on the above and also looking into the code and quickly estimating the differences to be negligible at best for the upstream developers; a factor which would make persuading said developers of the relevance of my optimizations more difficult.

So, with my original suggestion shelved, it’s time to look for a new function! That also means that once I do find a new one, and granted it can be optimized, I’ll post about said optimizations or what I’m thinking. Hopefully this is the last time I have to search the GLibC library, since I’d argue 80% if not 85% of it is very well optimized already.

by RayGervais at March 23, 2017 02:33 AM

March 22, 2017


Lucas Blotta

Lab 6 - Auto-Vectorization with gcc

This is an exercise on Auto-Vectorization. The purpose is to analyse a simple source code's compilation through its disassembly and modify the source in order to aid the compiler in making a more efficient binary.

The source was created by following these guidelines. The following functions live in the main file; they are responsible for adding pairs of array elements and saving the results into a third array, and for adding all elements of an array into a single value, respectively.


//adds pairs of array elements
void addArr(int* p3, int* p1, int* p2, int length){

    int i;

    for(i=0; i<length; i++)
        p3[i] = p1[i] + p2[i];

}

//adds elements of an int array
long addElems(int *arr, int length){

    int i;
    long sum = 0;

    for (i=0; i<length; i++)
        sum += arr[i];

    return sum;
}


By compiling the main file on an Aarch64 system with no compiler options, we get the following disassembly for the first function:


0000000000400660 <addArr>:
400660: d100c3ff sub sp, sp, #0x30 --> Creating Stack
400664: f9000fe0 str x0, [sp,#24] --> Saving p3 pointer
400668: f9000be1 str x1, [sp,#16] --> Saving p1 pointer
40066c: f90007e2 str x2, [sp,#8] --> Saving p2 pointer
400670: b90007e3 str w3, [sp,#4] --> Saving length int
400674: b9002fff str wzr, [sp,#44] ---> assigning zero to i
400678: 14000014 b 4006c8 <addArr+0x68> --->Start of loop
40067c: b9802fe0 ldrsw x0, [sp,#44] ---> load i into r0
400680: d37ef400 lsl x0, x0, #2 --> Quadruples i (int = 4 bytes)
400684: f9400fe1 ldr x1, [sp,#24] --> loads p3 (pointer) into r1
400688: 8b000020 add x0, x1, x0 --> offsets p3 pointer by i, save in r0
40068c: b9802fe1 ldrsw x1, [sp,#44] --> load i into r1 (need to keep r0) (again)
400690: d37ef421 lsl x1, x1, #2 --> quadruples i (again)
400694: f9400be2 ldr x2, [sp,#16] --> load p1 (pointer) into r2
400698: 8b010041 add x1, x2, x1 --> offsets p1 pointer by i, save in r1
40069c: b9400022 ldr w2, [x1] --> load p1 int value into r2
4006a0: b9802fe1 ldrsw x1, [sp,#44] --> load i into r1 (need to keep r0) (again)
4006a4: d37ef421 lsl x1, x1, #2 --> quadruples i (again)
4006a8: f94007e3 ldr x3, [sp,#8] --> load p2 (pointer) into r3
4006ac: 8b010061 add x1, x3, x1 --> offsets p2 pointer by i, save in r1
4006b0: b9400021 ldr w1, [x1] --> load p2 int value into r1
4006b4: 0b010041 add w1, w2, w1 --> finally adds pair of elements, save in r1
4006b8: b9000001 str w1, [x0] --> store into p3 array
4006bc: b9402fe0 ldr w0, [sp,#44] \
4006c0: 11000400 add w0, w0, #0x1 --> Inc i
4006c4: b9002fe0 str w0, [sp,#44] /
4006c8: b9402fe1 ldr w1, [sp,#44] ---> loads i into r1
4006cc: b94007e0 ldr w0, [sp,#4] ---> loads length int into r0
4006d0: 6b00003f cmp w1, w0 ---> i - length
4006d4: 54fffd4b b.lt 40067c <addArr+0x1c> ---> if negative, loop again
4006d8: 9100c3ff add sp, sp, #0x30 ---> restores stack pointer
4006dc: d65f03c0 ret ---> return to caller

As we can see from the code, there is no vectorization happening. The function simply computes one addition per loop iteration. Also, we notice that the program uses the registers very conservatively: it stores all values into RAM and never even gets to use the r4 register, in order to "play safe" in relation to other calls in the program. Accessing RAM is extremely slow from the CPU's point of view.

Now using the '-O1' compiler option:

0000000000400660 <addArr>:
400660: 6b1f007f cmp w3, wzr --> w3 originally has 'length'
400664: 5400018d b.le 400694 <addArr+0x34>
400668: 51000466 sub w6, w3, #0x1
40066c: 910004c6 add x6, x6, #0x1
400670: d37ef4c6 lsl x6, x6, #2
400674: d2800003 mov x3, #0x0 // #0
400678: b8636825 ldr w5, [x1,x3]
40067c: b8636844 ldr w4, [x2,x3] --> r1 and r2 have the pointers to p1 and p2
400680: 0b0400a4 add w4, w5, w4
400684: b8236804 str w4, [x0,x3] --> r0 has pointer to p3
400688: 91001063 add x3, x3, #0x4
40068c: eb06007f cmp x3, x6
400690: 54ffff41 b.ne 400678 <addArr+0x18>
400694: d65f03c0 ret


We can see that it processes the results in a more direct way and minimally uses the RAM, while not allocating any stack. Still, the compiler wouldn't vectorize the code, as auto-vectorization is not enabled. To enable it, we either need to use -O3, which bundles various flags, or just add the single '-ftree-vectorize' flag when compiling.

Let's just try using the single flag, while on -O1:


0000000000400660 <addArr>:
400660: 6b1f007f cmp w3, wzr
400664: 540007ad b.le 400758 <addArr+0xf8>
400668: 91004004 add x4, x0, #0x10
40066c: eb04003f cmp x1, x4
400670: 1a9f37e6 cset w6, cs
400674: 91004025 add x5, x1, #0x10
400678: eb05001f cmp x0, x5
40067c: 1a9f37e5 cset w5, cs
400680: 2a0500c5 orr w5, w6, w5
400684: eb04005f cmp x2, x4
400688: 1a9f37e6 cset w6, cs
40068c: 91004044 add x4, x2, #0x10
400690: eb04001f cmp x0, x4
400694: 1a9f37e4 cset w4, cs
400698: 2a0400c4 orr w4, w6, w4
40069c: 6a0400bf tst w5, w4
4006a0: 54000460 b.eq 40072c <addArr+0xcc>
4006a4: 71000c7f cmp w3, #0x3
4006a8: 54000429 b.ls 40072c <addArr+0xcc>
4006ac: 53027c66 lsr w6, w3, #2
4006b0: 531e74c7 lsl w7, w6, #2
4006b4: 34000387 cbz w7, 400724 <addArr+0xc4>
4006b8: d2800004 mov x4, #0x0 // #0
4006bc: 2a0403e5 mov w5, w4
4006c0: 8b040028 add x8, x1, x4
4006c4: 4c407901 ld1 {v1.4s}, [x8]
4006c8: 8b040048 add x8, x2, x4
4006cc: 4c407900 ld1 {v0.4s}, [x8]
4006d0: 4ea08420 add v0.4s, v1.4s, v0.4s --> Vectorization!
4006d4: 8b040008 add x8, x0, x4
4006d8: 4c007900 st1 {v0.4s}, [x8]
4006dc: 110004a5 add w5, w5, #0x1
4006e0: 91004084 add x4, x4, #0x10
4006e4: 6b0600bf cmp w5, w6
4006e8: 54fffec3 b.cc 4006c0 <addArr+0x60>
4006ec: 1400000a b 400714 <addArr+0xb4>
4006f0: 937e7c85 sbfiz x5, x4, #2, #32
4006f4: b8656827 ldr w7, [x1,x5]
4006f8: b8656846 ldr w6, [x2,x5]
4006fc: 0b0600e6 add w6, w7, w6
400700: b8256806 str w6, [x0,x5]
400704: 11000484 add w4, w4, #0x1
400708: 6b04007f cmp w3, w4
40070c: 54ffff2c b.gt 4006f0 <addArr+0x90>
400710: 14000012 b 400758 <addArr+0xf8>
400714: 2a0703e4 mov w4, w7
400718: 6b07007f cmp w3, w7
40071c: 54fffea1 b.ne 4006f0 <addArr+0x90>
400720: 1400000e b 400758 <addArr+0xf8>
400724: 52800004 mov w4, #0x0 // #0
400728: 17fffff2 b 4006f0 <addArr+0x90>
40072c: 51000466 sub w6, w3, #0x1
400730: 910004c6 add x6, x6, #0x1
400734: d37ef4c6 lsl x6, x6, #2
400738: d2800003 mov x3, #0x0 // #0
40073c: b8636825 ldr w5, [x1,x3]
400740: b8636844 ldr w4, [x2,x3]
400744: 0b0400a4 add w4, w5, w4
400748: b8236804 str w4, [x0,x3]
40074c: 91001063 add x3, x3, #0x4
400750: eb06007f cmp x3, x6
400754: 54ffff41 b.ne 40073c <addArr+0xdc>
400758: d65f03c0 ret


We finally get vectorized operations! It does do an awful lot of comparisons, though, probably to check alignment and overlap, resulting in this 64-line listing.

As most modern processors align variables, we can pretty much tell the compiler to assume that they are, in fact, aligned. Following some techniques in this article, our function now looks like this:


void addArr(int *__restrict p3, int *__restrict p1, int *__restrict p2, int length){
    int i;
    int *arr3 = __builtin_assume_aligned(p3, 16);
    int *arr2 = __builtin_assume_aligned(p2, 16);
    int *arr1 = __builtin_assume_aligned(p1, 16);

    for(i=0; i<length; i++)
        arr3[i] = arr1[i] + arr2[i];
}

The '__builtin_assume_aligned' function hints to the compiler that the values are at least 16-byte aligned, and the '__restrict' keyword tells it that the pointers refer to non-overlapping regions of memory at runtime. All this allows the compiler to perform some optimizations that result in this shorter code:


0000000000400660 <addArr>:
400660: 6b1f007f cmp w3, wzr
400664: 5400046d b.le 4006f0 <addArr+0x90>
400668: 53027c66 lsr w6, w3, #2
40066c: 531e74c5 lsl w5, w6, #2
400670: 340003c5 cbz w5, 4006e8 <addArr+0x88>
400674: 71000c7f cmp w3, #0x3
400678: 54000389 b.ls 4006e8 <addArr+0x88>
40067c: d2800004 mov x4, #0x0 // #0
400680: 2a0403e7 mov w7, w4
400684: 8b040008 add x8, x0, x4
400688: 8b04002a add x10, x1, x4
40068c: 8b040049 add x9, x2, x4
400690: 4c407941 ld1 {v1.4s}, [x10]
400694: 4c407920 ld1 {v0.4s}, [x9]
400698: 4ea08420 add v0.4s, v1.4s, v0.4s ---> Still vectorizing!
40069c: 4c007900 st1 {v0.4s}, [x8]
4006a0: 110004e7 add w7, w7, #0x1
4006a4: 91004084 add x4, x4, #0x10
4006a8: 6b0700df cmp w6, w7
4006ac: 54fffec8 b.hi 400684 <addArr+0x24>
4006b0: 1400000a b 4006d8 <addArr+0x78>
4006b4: 937e7c85 sbfiz x5, x4, #2, #32
4006b8: b8656827 ldr w7, [x1,x5]
4006bc: b8656846 ldr w6, [x2,x5]
4006c0: 0b0600e6 add w6, w7, w6
4006c4: b8256806 str w6, [x0,x5]
4006c8: 11000484 add w4, w4, #0x1
4006cc: 6b04007f cmp w3, w4
4006d0: 54ffff2c b.gt 4006b4 <addArr+0x54>
4006d4: 14000007 b 4006f0 <addArr+0x90>
4006d8: 2a0503e4 mov w4, w5
4006dc: 6b05007f cmp w3, w5
4006e0: 54fffea1 b.ne 4006b4 <addArr+0x54>
4006e4: 14000003 b 4006f0 <addArr+0x90>
4006e8: 52800004 mov w4, #0x0 // #0
4006ec: 17fffff2 b 4006b4 <addArr+0x54>
4006f0: d65f03c0 ret

Applying the same techniques to the 'addElems' function:


long addElems(int *__restrict p_arr, int length){

    int i;
    long sum = 0;
    int *arr = __builtin_assume_aligned(p_arr, 16);

    for (i=0; i<length; i++)
        sum += arr[i];

    return sum;
}

And we also get a vectorized and relatively short assembly code.

Conclusion

As GCC supports many different processors and architectures, it needs to maintain stability between these different platforms. This results in a lot of the optimizations not taking place automatically and leaves a lot of it up to the programmer to figure out.

Using vectorization can grant huge performance boosts, and there is a myriad of existing software that doesn't take advantage of it, sometimes opting for multi-threaded solutions instead. With that said, there's great untouched potential to improve this software with vectorization, and that might lead to better auto-vectorization support on the part of compilers and CPU manufacturers.


by Lucas Blotta (noreply@blogger.com) at March 22, 2017 06:37 PM

March 21, 2017


Badr Modoukh

DPS909 Lab 7 – Open Source Tooling and Automation

In lab 7 I was assigned the task to explore and learn about open source tooling and automation. I found this lab to be useful and learned some interesting things from it.

I did this lab by first creating a repository on GitHub. This repository will be expanded on in the coming weeks. It is a starting point to learning open source tools. I initialized the repository with a README.md file, added a .gitignore for node, and added a license using the MIT license.

Here is a link to my repository: https://github.com/badrmodoukh/Seneca2017LearningLab

After that I cloned the repository using:

git clone git@github.com:badrmodoukh/Seneca2017LearningLab.git

After the repository was cloned I initialized the package.json file using npm. This created the package.json file and included the information I set. The steps I took to accomplish this are:

  • npm init
  • entered the name, version, description, entry point, license

Once I created the package.json file I needed to create the node.js module in a JavaScript file called seneca.js.

I implemented the isValidEmail and formatSenecaEmail functions in this file. This is the end result of that file:

Screen Shot 2017-03-21 at 12.28.10 PM.png

The isValidEmail function checks to see if the passed in email is a valid Seneca email. The formatSenecaEmail function creates a Seneca email using the string that was passed into the function.

Once I finished implementing these functions I wanted to test my work and see if the functions did what they are supposed to do. In order for me to test my work I needed to write a simple command line tool that uses seneca.js.

I accomplished this task by following the steps done in this tutorial: Building command line tools with Node.js.

These are a summary of the steps I did:

  • created an index.js file
  • added a shebang (#!/usr/bin/env node) to the beginning of the file
  • added a bin section in the package.json file with a property key “seneca” and property value “./index.js” which is the script that will run the seneca.js module
  • used “commander” npm package to receive arguments
  • defined the options to use
  • installed shell command using npm install -g

Here is how the index.js file looks at the end:

Screen Shot 2017-03-21 at 12.48.08 PM.png
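Since the screenshot doesn't reproduce here, a dependency-free sketch of the same idea follows. The post actually used the commander package, and the function bodies below are assumptions rather than the post's code:

```javascript
#!/usr/bin/env node
// Dependency-free sketch of a seneca CLI (the post used "commander";
// the function bodies here are illustrative assumptions)
var DOMAIN = '@myseneca.ca';

// valid only if the address ends with the myseneca domain
function isValidEmail(email) {
  return typeof email === 'string' && email.endsWith(DOMAIN);
}

// turn a bare name into a myseneca address
function formatSenecaEmail(name) {
  return name + DOMAIN;
}

var flag = process.argv[2];
var value = process.argv[3];

if (flag === '-v') {
  console.log(isValidEmail(value));
} else if (flag === '-f') {
  console.log(formatSenecaEmail(value));
} else {
  console.log('usage: seneca -v <email> | -f <name>');
}
```

Running `node index.js -f jsmith` with this sketch would print jsmith@myseneca.ca.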

This enabled me to test my seneca.js module by running the commands:

“seneca -v <seneca email>” which checks if the Seneca email is valid

“seneca -f <name>” which creates a Seneca email with the given name.

“seneca --help” which displays the options available for this command

Here is how this looks:

Screen Shot 2017-03-21 at 12.50.34 PM.png

After I tested the functions I implemented I needed to add ESLint to avoid common problems in my code.

I accomplished this by doing these steps:

  • npm install eslint --save-dev (which installs eslint and adds it to the development dependencies in the package.json file)
  • create a configuration for eslint using ./node_modules/.bin/eslint --init
  • selected Airbnb JavaScript rules
  • selected JSON format for the eslint config file

This created a new file called .eslintrc.json.

After I configured eslint I ran it to check the seneca.js file, and it showed me a couple of “Unexpected unnamed function” warnings.

I fixed this warning by giving each function expression a name. ESLint Rules documents all the errors that can occur and how to fix them.

I added a script to the package.json file to always check my code when I make changes.

The final step was to use TravisCI. I thought using TravisCI would be a difficult task to accomplish, but surprisingly it was very simple to add to the repository.

I followed the steps on Getting started to accomplish this task. I found these steps to be really easy to follow and clear.

Here is a summary of what I did to add TravisCI to my repository:

  • signing in to Travis CI using my GitHub account
  • Enabling Travis CI for my specific repository
  • created a .travis.yml file for a node project to tell Travis CI what to build

Here is the link to my TravisCI build: TravisCI build for lab

Finally, I added the Travis CI build badge by copying the markdown from the Travis CI website, following the steps in Embedding Status Images.

I learned a lot from doing this lab and found it really useful and interesting. The tools used in this lab, such as ESLint and Travis CI, appear in many other open source projects, so understanding how to add them to a project of my own is valuable. Now I understand how other open source projects add these tools to their repositories.

 


by badrmodoukh at March 21, 2017 05:27 PM


Timothy Moy

OSD600 Release 0.2: Update

It’s been a while, so I thought an update was in order. There is a new pull request which can be found here.

What’s New?

Changed the library that’s being used

Previously we were using CSSgram as the library to apply filters to the selfies. It worked well for showing filtered versions in the preview image, using CSS attributes to change how the specified image is displayed.

This use of CSS classes posed a small but significant problem: the saved images did not include the filters, since only the on-screen display was being changed.

Dave and Mike suggested that we use a canvas manipulation library to modify the image itself rather than change the way that it is displayed to the user. Caman.js is currently being used in the pull request above and the old one has been closed to prevent confusion.
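To make the saved file keep the effect, the pixel data itself has to be rewritten, which is what a canvas library like Caman does under the hood. A dependency-free sketch of the idea, applying a grayscale transform to a raw RGBA array (the flat `[r, g, b, a, ...]` layout that `getImageData` returns in a browser):

```javascript
// Apply a grayscale filter directly to RGBA pixel data, the way a canvas
// manipulation library does, so the saved image keeps the effect.
function grayscale(data) {
  for (let i = 0; i < data.length; i += 4) {
    // Standard luminance weights for converting RGB to gray.
    const gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = data[i + 1] = data[i + 2] = Math.round(gray);
    // data[i + 3] (alpha) is left untouched.
  }
  return data;
}
```

Writing the modified data back with `putImageData` before exporting means the downloaded selfie matches the preview.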

Streamlined the code

In order to use Caman effectively, I looked for some examples and found this one, which I based the current implementation on.

I got rid of the extra variables used in their JavaScript implementation and added some code to the persist-photo function in Brackets, so it works fairly effectively without changing the rest of the code too much.

Updated the user interface

A few things were added to the interface like a new preview image for the Caman canvas as well as buttons for playing around with filters.

In the CSS file a few classes were changed to allow the user to see the new preview image and buttons via the overflow attribute and extra classes for the canvas.

Next Steps

Provide a Cleaner Interface

As of now, the main focus is to clean up the interface so it isn’t so cluttered. This might involve merging the canvas that our interface is using with the Caman canvas, changing how filters are displayed, or some other suggestion that pops up.

Change Coding Patterns

It is probably possible to change the filter functions into listeners and merge several filter functions together. This will reduce the length of the code, but some specialized filters that require parameters might make it troublesome for maintenance.
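One possible shape for that merge, sketched with hypothetical filter names: a single dispatcher keyed by filter name, which one shared click listener could call instead of a separate function per button:

```javascript
// Hypothetical sketch: a table of per-pixel filters plus one dispatcher,
// replacing a separate handler function for every filter button.
const filters = {
  invert: (v) => 255 - v,
  darken: (v) => Math.max(0, v - 40),
  identity: (v) => v,
};

function applyFilter(name, pixels) {
  const fn = filters[name];
  if (!fn) throw new Error(`unknown filter: ${name}`);
  return pixels.map(fn);
}

// In the page, one listener could read the filter name from the clicked
// button (e.g. button.dataset.filter) and call applyFilter with it.
```

Filters that need extra parameters would not fit this per-pixel shape directly, which is the maintenance concern noted above.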

Change the filters

I have chosen 7 filters which I thought would be useful for users, but having custom filters or using a different set of filters is not out of the question.

In Closing

This feature is almost done, but some fine tuning is needed to get it polished up to the standard that the team wants. Stay tuned for more updates.


by Timothy Moy at March 21, 2017 04:02 AM


Len Isac

glibc difftime – no need for optimization

Upon further investigation, difftime can be left as is, with no further optimization: anything that could be done would have minimal effect on execution time. I will go over why that is.

double
__difftime (time_t time1, time_t time0)
{
  /* Convert to double and then subtract if no double-rounding error could
     result.  */

  if (TYPE_BITS (time_t) <= DBL_MANT_DIG
      || (TYPE_FLOATING (time_t) && sizeof (time_t) < sizeof (long double)))
    return (double) time1 - (double) time0;

  /* Likewise for long double.  */

  if (TYPE_BITS (time_t) <= LDBL_MANT_DIG || TYPE_FLOATING (time_t))
    return (long double) time1 - (long double) time0;

  /* Subtract the smaller integer from the larger, convert the difference to
     double, and then negate if needed.  */

  return time1 < time0 ? - subtract (time0, time1) : subtract (time1, time0);
}

For the first if condition, TYPE_BITS (time_t) and DBL_MANT_DIG are both compile-time constants, so the compiler evaluates the comparison during constant folding and only the branch that is actually taken survives in the executable. The same applies to the second if condition: TYPE_BITS (time_t) <= LDBL_MANT_DIG is evaluated at compile time.

We can further validate this by compiling the code and looking at the assembly file:

I wrote a tester file that exercises difftime from time.h:

// len_difftime_test.c
#include <stdio.h>
#include <time.h>
#include <limits.h>
#include <stdint.h>

int main(){
    // test time_t to uint_max conversion
    time_t time1 = time(NULL);
    time_t time0 = time(NULL) + 10;
    uintmax_t dt = (uintmax_t) time1 - (uintmax_t) time0;
    double delta = dt;
    printf("time1 = %jd\ntime0 = %jd\n", (intmax_t) time1, (intmax_t) time0);
    printf("(uintmax_t) time1 = %ju\n", (uintmax_t) time1);
    printf("(uintmax_t) time0 = %ju\n", (uintmax_t) time0);

    // test difftime function
    double result;
    result = difftime(time1, time0);
    printf("difftime(time1, time0) = %f\n", result);
    result = difftime(time0, time1);
    printf("difftime(time0, time1) = %f\n", result);

    return 0;
}

Compile:
gcc -g -o len_difftime_test len_difftime_test.c

I use the gdb debugger to get to line 18, which makes the first call to difftime.
gdb len_difftime_test

Set a breakpoint at line 18 and run:

(gdb) b 18
Breakpoint 1 at 0x400638: file len_difftime_test.c, line 18.
(gdb) r
Starting program: /home/lisac/SourceCode/Seneca/spo600/project/src/glibc/time/len_difftime_test 
time1 = 1490051018
time0 = 1490051028
(uintmax_t) time1 = 1490051018
(uintmax_t) time0 = 1490051028

Breakpoint 1, main () at len_difftime_test.c:18
18      result = difftime(time1, time0);

Step into the difftime function:
__difftime (time1=1490051390, time0=1490051400) at difftime.c:103
103 {
(gdb) s
114     return (long double) time1 - (long double) time0;
(gdb) s
120 }

Short-circuiting or test-reordering will not improve the executable, since the compiler already folds the constant comparisons away at compile time. As the gdb session shows, stepping into __difftime lands directly on the return statement; no condition is evaluated at run time.

Here is the pre-processor output, with the macros expanded to the constant expressions that the compiler then folds away:

cpp difftime.c

  if ((sizeof (time_t) * 8) <= 53 <-- removed
      || (((time_t) 0.5 == 0.5) && sizeof (time_t) < sizeof (long double))) <-- removed
    return (double) time1 - (double) time0;



  if ((sizeof (time_t) * 8) <= 64 || ((time_t) 0.5 == 0.5)) <-- removed
    return (long double) time1 - (long double) time0;

Now I will be looking into more functions that are better candidates for optimization.


by Len Isac at March 21, 2017 02:27 AM

March 20, 2017


Ray Gervais

Bramble Console = self.Console()

An OSD600 Contribution Update

This small post is an update to the Thimble Console implementation that I’ve been working on with the help of David Humphrey. I’m writing this while the pull request is still being reviewed and extended as requested; it could well be merged or approved as I write, with the implied “Now do the UI” next step assigned as well.

What’s Finished?

The backend, though it still needs fleshing out of more specific functions such as console.table, console.count, and console.trace. The basic console functions, which include console.log, console.warn, console.info, console.error, console.clear, console.time, and console.timeEnd, have all been implemented, each supporting multiple arguments. That support proved critical once the needs of the console implementation were evaluated, since multiple arguments are what give console output meaningful data and context.
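A simplified, hypothetical sketch of that kind of backend (the real Bramble code differs): wrap the console methods so each call is forwarded with its method name and full argument list to whatever UI will render it:

```javascript
// Simplified sketch of intercepting console calls so they can be forwarded
// to a dedicated console UI instead of only the browser's devtools.
function makeConsoleProxy(send) {
  const proxy = {};
  ['log', 'warn', 'info', 'error'].forEach((method) => {
    proxy[method] = (...args) => {
      // Forward the method name and all arguments, preserving the
      // multiple-argument behaviour of the real console.
      send({ method, args });
    };
  });
  return proxy;
}

// Usage: collect forwarded messages instead of printing them.
const messages = [];
const fakeConsole = makeConsoleProxy((msg) => messages.push(msg));
fakeConsole.log('value is', 42);
```

The `send` callback is where a real implementation would post the message out of the preview frame to the editor's console panel.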

What’s Left?

User interface! Specifically, the experience: access to a dedicated console without the need for developer tools or third-party extensions. With the backend implemented and fleshed out to a releasable state, what is left is the presentation layer, the handling of that backend data, and the experience itself. Before, the only way to view your console logs was through the developer tools, which mixed in non-specific console data for the entire Thimble instance along with performance-related logs, making the data you’re interested in borderline impossible to find at times. Furthermore, before the backend was implemented, the console functions themselves referenced whichever file your editor currently had open; though not a functional issue, it certainly was not clean or user friendly. Here’s an example taken from Safari’s Error Console:

Visual Ideas and Design Cues

Below, I’ve included a few console implementations, designs, or built in functions which I’d like to extend or take inspiration from:

Brackets Console Extension

This is a popular console extension for Brackets, which I’ve been advised to extend to work with Bramble seamlessly. With that, I’d change the typography and colors to better follow the standard Bramble color scheme, and also modify the interface based on the requirements of Thimble.

Node Console

This comes from an IntelliJ plugin, Nord-Syntax, which themes the IDE to use the Nord color palette, of which I am a fan, having recently discovered it. Simply put, while the console will not be togglable at the start, I’d personally advocate the use of the Nord color scheme, or even just a muted version of the Thimble color scheme which returns to the regular theme when interacted with, allowing the console to not intrude or become the primary focus on the developer’s screen until needed.

by RayGervais at March 20, 2017 03:03 PM


John James

SPO Project Update

Today I started to write out how I am going to optimize my function strcspn. So far I’ve written a tester and a first version of the code. At the moment I’m trying to figure out how to avoid using a nested for loop, but it is a start nonetheless!

size_t STRCSPN_JJ(const char* str, const char* reject) {
  const char *p, *q;

#ifdef __OPTIMIZE__
  if (inside_main)
    abort();
#endif

  for (p = str; *p; p++) {
    for (q = reject; *q; q++) {
      if (*p == *q)
        return p - str;   /* length of the segment before the first match */
    }
  }

  /* No character from reject found: like strcspn, return the full length. */
  return p - str;
}

At the moment this is a lot slower than the GNU library function, but I still feel like I can improve it!

Here is what the tester I made had to say: entry three is my function

[Screenshot: benchmark results for the three strcspn implementations]

As you can see, when we get up to a 900-element array my function gets slower: the other two stay at 0 milliseconds while mine went up to 2 milliseconds. So I’m still going to have to figure out how to fix this, but I am hopeful I’ll find a solution. At least on the upside, my function returns the correct result!
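The nested loop makes the scan O(n * m). One standard way to avoid it is to build a lookup table of the reject characters first, then scan the string once. The idea, sketched here in JavaScript rather than C for brevity (`strcspnLike` is a hypothetical name, not glibc code):

```javascript
// O(n + m) version of strcspn's logic: build a set of reject characters
// once, then scan the string a single time.
function strcspnLike(str, reject) {
  const rejectSet = new Set(reject);
  let i = 0;
  while (i < str.length && !rejectSet.has(str[i])) {
    i += 1;
  }
  // Like strcspn, return the length of the initial segment that contains
  // no characters from `reject`.
  return i;
}
```

In C the same idea is usually a 256-entry boolean table indexed by character value, which removes the inner loop entirely.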


by johnjamesa70 at March 20, 2017 05:43 AM

OSD 600 (Lab 7, creating a repo and the tools that are provided)

Overview:

This lab introduced all the tooling options my class and I have available to us on GitHub, from the README provided when creating a repository to using Travis CI to make sure the code we commit compiles and is constantly tested.

Questions:

Here are some questions my professor wanted us to answer

  • what you did
    • We worked with many of the tooling options that GitHub provides us, from creating a package.json to linting the code we write
  • what you learned
    • I learned how easy it is to set up a repo and implement proper practices, such as using Travis CI to make sure the code I submit will not break. Also, the interactive JSON questionnaire was a really cool thing to learn!
  • things you found interesting or difficult.
    • Most of this lab I found easy; the only hard part was working with the linter on a Windows computer. Even after trying the commands in the command prompt instead of Git Bash, it proved to be very challenging and frustrating. I am hoping to finish the lint part on my Linux laptop during the week.

My thoughts on the lab Parts

Creating a new Repo and cloning it:

This was an easy part: just clicking and following the instructions. It was still cool that you can select a README file, a .gitignore, and a license right off the bat when creating a repo. This was definitely something cool to learn and will be very helpful for me in the future.

Initialize a new Node.js module:

This part I thought was the coolest part of the lab, since we got to work with npm and have an interactive questionnaire populate the JSON file so you don’t have to create it manually (extremely useful in my opinion). Definitely gonna remember the command npm help json for sure!

Create seneca.js:

This part was self-explanatory and was really easy. Not much to talk about this part

Add ESLint:

Well, when I started this part I thought “Piece of cake, this lab is easy.” Let’s just say I was dead wrong. It got to the part where it asked me how I would like to configure ESLint, and I could not move my arrow keys at all. After asking my roommate Ray about it, he said to try the command prompt. I did, and I could use the arrows, but for some reason it would not save and caused even more problems. Still haven’t fixed this part yet.

Automate Lint Checking:

This part was pretty simple: just going into the JSON file and adding the test script and the path to the linter.

Use TravisCi:

This part was totally new territory; I had no clue what I was doing. It was pretty cool to see how this website works with GitHub and how the testing works. I also enjoyed how foolproof it is and how much it can help any project.

Add Travis-Ci Badge to readme

This was pretty easy: just copy the markdown text provided by Travis CI and put it in the README file!

 

Overall:

I really enjoyed this lab, I learned a lot of cool tools that I have access to and learned why I should be using them constantly. I am looking forward to the next lab we have!

 


by johnjamesa70 at March 20, 2017 05:36 AM


Ray Gervais

Creating a NodeJS Driven Project

OSD600 Week Nine Deliverable

Introduction

For this week, we were introduced to a few technologies that, though we had interacted with them during our contributions and coding, were never really explained: the ‘why’, the ‘how’, or even the ‘where to start’. The platforms on trial? Node, Travis CI, and even ESLint (curse you, linter, for making my code uniform).

Init.(“NodeJS”);

The first process was simply creating a repository on GitHub, cloning it onto our workstations, and then letting the hilarity of initializing a new NodeJS module occur. Why do I cite such humour for the latter task? Because I witnessed a few people forget which directory they were in, thus initializing Node in their Root, Developer, You-Name-It folder; anything but their repository’s cloned folder. Next was learning what you could, or could not, input into the initialization prompts. Included below is the example script taken from Dave’s README.md, which shows how the process should look for *nix users. Windows users had a more difficult time, having to use Command Prompt instead of their typical Git Bash terminal, which would fail when typing ‘yes’ into the final step.

$ npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.

Press ^C at any time to quit.
name: (Seneca2017LearningLab) lab7
version: (1.0.0) 1.0.0
description: Learning Lab
entry point: (index.js) seneca.js
test command:
git repository: (https://github.com/humphd/Seneca2017LearningLab.git)
keywords:
author:
license: (ISC) MIT
About to write to /Users/dave/Sites/repos/Seneca2017LearningLab/package.json:

{
  "name": "lab7",
  "version": "1.0.0",
  "description": "Learning Lab",
  "main": "seneca.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/humphd/Seneca2017LearningLab.git"
  },
  "author": "",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/humphd/Seneca2017LearningLab/issues"
  },
  "homepage": "https://github.com/humphd/Seneca2017LearningLab#readme"
}

Is this ok? (yes)

Creating The Seneca Module

The next step was to create the seneca.js module, which will be expanded upon in further labs. For now, we had to write two simple functions, isValidEmail and formatSenecaEmail. This task took minutes, thanks to W3Schools’ email validation regular expression, which, along with my code, is included below. The bigger challenge was getting ESLint to like my code.

/**
 * Given a string `email`, return `true` if the string is in the form
 * of a valid Seneca College email address, `false` otherwise.
 */
exports.isValidEmail = function (email) {
  return /^\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,3})+$/.test(email);
};

/**
 * Given a string `name`, return a formatted Seneca email address for
 * this person. NOTE: the email doesn't need to be real/valid/active.
 */
exports.formatSenecaEmail = function (name) {
  // trim() returns a new string, so use its result directly.
  return name.trim().concat('@myseneca.ca');
};

Depending On ESLint

ESLint: up to this point I had only dealt with it in small battles, waged during the building process of Brackets where my code was put against its rules. Now I am tasked not with conquering it (which, for a developer, means writing code that complies with the preset rules), but with adding the dependency that builds it into the project’s development environment. Installing ESLint requires the following command, followed by the initialization which lets you select how you’d like the linter to function, along with a style guide. The process that we followed is below.

$ npm install eslint --save-dev

$ ./node_modules/.bin/eslint --init

? How would you like to configure ESLint?
  Answer questions about your style
❯ Use a popular style guide
  Inspect your JavaScript file(s)

? Which style guide do you want to follow?
  Google
❯ Airbnb
  Standard

? Do you use React? (y/N) N

? What format do you want your config file to be in?
  JavaScript
  YAML
❯ JSON

Running ESLint manually involves running $ ./node_modules/.bin/eslint *.js, which can then be automated by adding the following code to the package.json file.

"scripts": {
  "lint": "node_modules/.bin/eslint *.js",
  "test": "npm run -s lint"
}

This allows one to run linting at any time with npm run lint.

Travis CI Integration

When writing the next evolutionary script, program, even website for that matter, you want to ensure that it works, and once it does ‘work’, you double check on a dedicated platform. That’s where the beauty which is Travis CI comes to play, allowing for automated tested (once properly configured) of your projects and repositories. We were instructed to integrate Travis with this exercise with Dave’s provided instructions below.

Now that we have the basics of our code infrastructure set up, we can use a continuous integration service named Travis CI to help us run these checks every time we do a new commit or someone creates a pull request. Travis CI is free to use for open source projects. It will automatically clone our repo, checkout our branch and run any tests we specify.

  • Sign in to Travis CI with your GitHub account
  • Enable Travis CI integration with your GitHub account for this repo in your profile page
  • Create a .travis.yml file for a node project. It will automatically run your npm test command. You can specify “node” as your node.js version to use the latest stable version of node. You can look at how I did my .travis.yml file as an example.

Push a new commit to your repo’s master branch to start a build on Travis. You can check your builds at https://travis-ci.org/profile/<your-username>. For example, here is my repo’s Travis build page: https://travis-ci.org/humphd/Seneca2017LearningLab

Follow the Getting started guide and the Building a Node.js project docs to do the following:

Get your build to pass by fixing any errors or warnings that you have.

Once that was complete, the final step was to integrate a Travis CI Build Badge into the README of our repository. This final step stood out to me, for I had seen many of these badges before without prior knowledge as to their significance. Learning how Travis CI could automate the entire integration testing of your project on a basic Ubuntu 12.04 (if configured to that) machine within minutes has opened my eyes up to a new form of development testing, implementation, and more open-source goodness. The final repository with all that said and done can be found for the curious, here.

by RayGervais at March 20, 2017 01:26 AM

March 19, 2017


Len Isac

Open Source Tooling and Automation

Here I will demonstrate an example of using various open source tooling and automation on a GitHub repository.

Create repo to test

Initial commit for new test repository includes:

  • README file
  • .gitignore for Node
  • MIT license

Initialize npm package.json file

Since I have nodejs installed on my machine, I can go ahead and pull the newly created repository to my local machine.

git pull git@github.com:lkisac/OpenSourceToolingAutomation.git

Initialize the package.json file:
npm init

{
  "name": "lab7",
  "version": "1.0.0",
  "description": "Open Source Tooling and Automation",
  "main": "seneca.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/lkisac/OpenSourceToolingAutomation.git"
  },
  "author": "",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/lkisac/OpenSourceToolingAutomation/issues"
  },
  "homepage": "https://github.com/lkisac/OpenSourceToolingAutomation#readme",
  "bin": {
    "seneca": "./seneca.js"
  },
  "dependencies": {
    "commander": "^2.9.0"
  }
}

Implement JavaScript functions

/**
 * Given a string `email`, return `true` if the string is in the form
 * of a valid Seneca College email address, `false` otherwise.
 */
exports.isValidEmail = function(email) {
    // TODO: needs to be implemented
};

/**
 * Given a string `name`, return a formatted Seneca email address for
 * this person. NOTE: the email doesn't need to be real/valid/active.
 */
exports.formatSenecaEmail = function(name) {
    // TODO: needs to be implemented
};
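For reference, one possible way the two stubs could be filled in (a sketch only; the repository's actual implementation differs and is cleaned up with ESLint later in the post):

```javascript
// A possible implementation of the two stubs (a sketch, not the
// repository's exact code).
function isValidEmail(email) {
  // Accept typical Seneca addresses like lkisac@myseneca.ca.
  return /^[A-Za-z0-9._-]+@myseneca\.ca$/.test(email);
}

function formatSenecaEmail(name) {
  return `${name.trim()}@myseneca.ca`;
}

module.exports = { isValidEmail, formatSenecaEmail };
```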

First attempt at implementing the stub functions (this will be improved later on using ESLint). The implementation also uses npm’s commander library so command line options can be passed to the script.

You can map a command name to a script via the bin field in the package.json file:

  "bin": {
    "seneca": "./seneca.js"
  },

Install the package globally, which links the bin command onto your PATH:

npm install -g

Run script from command line:

$ seneca -v lkisac@myseneca.ca
email: lkisac@myseneca.ca
valid

$ seneca -v lkisac@gmail.com
email: lkisac@gmail.com
invalid

$ seneca -f lkisac
name: lkisac
lkisac@myseneca.ca

$ seneca -v lkisac@gmail.com -f lkisac
email: lkisac@gmail.com
invalid
name: lkisac
lkisac@myseneca.ca

Code works as expected, although it needs some clean up. In the next section, I will show how ESLint can assist in the clean up process.

Clean code w/ ESLint

Install and configure ESLint to validate our coding style:

npm install eslint --save-dev

The --save-dev option adds ESLint as a development dependency (needed for developing the code rather than for using it).

For this example, ESLint is configured with Airbnb styleguide, No React, and in JSON format.
./node_modules/.bin/eslint --init

	Installing eslint-plugin-import, eslint-config-airbnb-base
	lab7@1.0.0 C:\github\OpenSourceToolingAutomation
	+-- eslint-config-airbnb-base@11.1.1
	`-- eslint-plugin-import@2.2.0
	  +-- builtin-modules@1.1.1
	  +-- contains-path@0.1.0
	  +-- doctrine@1.5.0
	  +-- eslint-import-resolver-node@0.2.3
	  +-- eslint-module-utils@2.0.0
	  | +-- debug@2.2.0
	  | | `-- ms@0.7.1
	  | `-- pkg-dir@1.0.0
	  +-- has@1.0.1
	  | `-- function-bind@1.1.0
	  +-- lodash.cond@4.5.2
	  `-- pkg-up@1.0.0
	    `-- find-up@1.1.2
	      `-- path-exists@2.1.0
	
Successfully created .eslintrc.json file in C:\github\OpenSourceToolingAutomation

Now I can run newly configured eslint on the JavaScript file seneca.js:

./node_modules/.bin/eslint seneca.js

Working with warnings/errors

First, there were many linebreak-style issues with the error message “Expected linebreaks to be ‘LF’ but found ‘CRLF’”. I fixed this by running dos2unix seneca.js to convert the line endings to Unix format.

Other warnings/errors included:

  • Unexpected var, use let or const instead
  • Strings must use singlequote
  • Missing space before function parentheses

To organize these fixes properly, I grouped similar issues together:

i.e. for the Unexpected var, use let or const instead error, I ran:

./node_modules/.bin/eslint seneca.js | grep 'Unexpected var, use let or const instead'

Once each line containing that issue was fixed, I committed the fix to GitHub. This makes each commit clearer and more specific, instead of cramming all the issues into one commit.

Commit history for the fixes (each message prefixed with “fixed”).

ESLint is extremely useful to get your code to match a specific style. The config file is customizable, so any project can contain its own settings. This can help contributors follow a specific standard for a given project.

Add Travis CI to repository

Following the getting started guide, I set up my Travis account by syncing my existing GitHub account.
You can customize your .travis.yml file for a particular language; the list of supported languages is provided here.

language: node_js
node_js:
  - "6"
install:
  - npm install
script:
  - npm test

You can also validate your yml file here by providing a link to your repository (containing the yml file), or by pasting your yml file into the textbox provided.

Example

To keep track of your repository’s build status you can add a “build badge” to your repository.

Travis CI is used in a great many GitHub open source projects; whenever you submit a pull request, it typically must pass one or more Travis builds.


by Len Isac at March 19, 2017 11:59 PM


Arsalan Khalid

Contributing towards EthereumJ : Release 1

Hello Open Source Family!

So I’ve finally made some progress, and completed my first pull request or ‘release’ for the Ethereum J project: https://github.com/ethereum/ethereumj

I have to say it was a pretty cool experience, to finally say that I’ve contributed to an open source project. I mean it’s not like my PR has been merged yet, but it’s pretty surreal. I’ve been working on enterprise projects, writing ‘private grade’ code for some time, most of the procedures and best practices were drilled into me real hard! I probably owe that last part to my great mentor, Mr. Andrew Morgan. He was my mentor at my first long term internship (for 16 months), and carried on to become one of my closest friends and colleagues at my new company Accenture.

Nonetheless, I have to say the experience is really different compared to contributing towards company/enterprise projects, even if they’re supposedly ‘open-source’ or open to the public. There are so many restrictions on best practices, how you should write commits, people’s strong opinions on merging or rebasing, and much more. I could probably go on a long spiel about this, but I think you fellow readers get the picture. Point is, this open source dev stuff is pretty cool: you can literally just work on whatever you want and get involved with the community through all the new tools out there like Slack, Gitter, and issue trackers. But I think the fundamental takeaway is that writing code for work can never give you that drive and enjoyment of contributing to something out of your own merit and motivation. In the professional world there’s usually a boss directing you towards the things that need to be done, even if you have the utmost autonomy.

Now, I know I went on a pretty invigorating run-down above, but in reality the PR I submitted isn’t too glamorous….
https://github.com/ethereum/ethereumj/pull/766

But still proud! It starts with the little things… I still remember writing my first Hello World program (was hard to find, but I found a picture of it):

I remember sending my brother a screenshot of my first footstep, as he was already working at Microsoft as a PM by then!

Anyways, let me talk about why I chose this issue and the progress I made. So first off I found this task in the issue list: https://github.com/ethereum/ethereumj/issues/706

I figured since it was only a refactoring task, it would be an easy way to get started, introduce myself to the community, and get familiar with the architecture/design of the project. It did a great job of teaching me how to set the project up, test, build, and import the Blockchain.

I learned some really cool things in the process, even about Blockchain from a technical perspective that I didn’t know before. First off, a node doesn’t really have much of a connection to a wallet; it’s essentially hardware that CAN be used as part of a distributed system to assist in the mining of transactions (full node), or simply be a node which owns a copy of the Blockchain (half-node). I know anyone would think that’s common sense, especially within this industry, but it’s still something you don’t really clearly understand until you see a node actually running! Through this node, you can also do something very important: smart contracts! I’ll stay away from going into too much detail about what smart contracts are, but the point is you can essentially utilize the full power of the Ethereum platform and Blockchain by being able to run a node and execute contracts. A quick note: these contracts can interestingly be run on either the test-net for free, or with a fee/cost on the main Ethereum platform.

Furthermore, I would also add that I got to see all of the things I’ve learned in the past few years just happening on their own habitually. For example, before committing, I knew to diff my commit just to make sure that it was exactly what I wanted. When writing my commit message I was keen on ensuring the description was clear and outlined exactly what the change is. Most importantly, I learned how to make an object immutable within a Java stack, which is incidentally something I’ve never done before!
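The EthereumJ refactor itself was Java (final fields, no setters); purely for illustration, the same immutability idea in JavaScript can be sketched with Object.freeze (the object contents below are made up):

```javascript
// Illustration of immutability in JavaScript: Object.freeze makes an
// object's own properties read-only. (The values here are hypothetical.)
const block = Object.freeze({ number: 766, hash: '0xabc' });

function tryMutate(obj) {
  try {
    obj.number = 999; // throws in strict mode, silently ignored otherwise
  } catch (e) { /* expected under 'use strict' */ }
  return obj.number;
}
```

Either way the object comes back unchanged, which is the property the Java refactor was after.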

All in all, this was a pretty cool experience. I’d say I learned many things, even though the time it took to code up this bad boy wasn’t entirely daunting, it was the aspects around it that made it pretty educational!

Stay tuned, more releases/PRs to come, readers! In the meantime I’ll keep Blockchaining and Open Sourcing… (if that’s a word).

Cheers!
Arsalan

by Arsalan Khalid at March 19, 2017 08:12 PM


Dang Khue Tran

OSD600: LAB 7: Open Source Tooling and Automation

In this week’s OSD600 lab, I had a chance to learn how to use open source tooling, automation and workflows. Specifically, I learned how to use ESLint and Travis CI, and how to initialize a Node module.

Create a Repo

The first thing I did was create a new repo on GitHub. Previously, most of my Git use was cloning repos or creating them from IDEs like Android Studio. This time I created the repo via the GitHub webpage, and I learned that GitHub provides a default list of files to ignore (.gitignore) based on what you are going to do with the repo, and can add a license to the repo. Cloning the repo was simple enough and didn’t cause any problems.

Initialize a Node.js module

Next is initializing a Node.js module; npm init is the command. Node Package Manager walks you through the steps to enter the information for your module, then generates a package.json file containing the information you just entered.
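For reference, npm init’s output is a small JSON file; a minimal sketch of what it generates (the field values here are placeholders, not the ones from my module):

```json
{
  "name": "my-module",
  "version": "1.0.0",
  "description": "A sample module",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```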

Setup ESLint

I installed ESLint as a project dependency with the command: npm install eslint --save-dev

If you only installed ESLint for this module, the eslint executable lives in the .bin folder inside your project’s node_modules folder.

Run eslint --init to set up the style guide that you would like to apply to your project’s code. I picked “Use a popular style guide”, the “Airbnb” style guide, no for React, and JSON for the config file format. Some npm modules were then installed, and a .eslintrc.json file containing the rules I set up was generated.
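The generated config is small; a hedged sketch of what an Airbnb-based .eslintrc.json might look like (the exact contents depend on your answers to eslint --init):

```json
{
  "extends": "airbnb-base",
  "env": {
    "node": true
  },
  "rules": {}
}
```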

To validate your code, run eslint and it will show the errors or warnings after it has looked through your code.

To automate Lint Checking, I modified the package.json file to have the scripts property like this:

"scripts": {
  "lint": "node_modules/.bin/eslint *.js",
  "test": "npm run -s lint"
}

I also learned that there are extensions for editors that use ESLint to display problems directly in your code, just like IntelliSense. I tried that with Atom and it is really convenient and intuitive.

Using Travis CI

Setting up Travis CI to run and test your code is simpler than I thought it would be.

Signing in with a GitHub account is enough to be able to use Travis CI. Then turn on the “switch” for the repo that you want Travis to build every time there is a commit or pull request.

Next, you need to create a .travis.yml file in your project root to tell Travis how to run your code. In this case, my config looked like this:

language: node_js
node_js:
  - "node"

Conclusion

What I did in this post was pretty simple in my opinion. I had already heard a lot about these tools but never had time to learn how to use them. I am glad that my professor had a guide for this that was easy to follow. It really helps to kick-start my learning about these tools for more use cases.


by trandangkhue27 at March 19, 2017 04:51 AM


Ray Gervais

Optimizing Glibc’s SegFault

SPO600 Project Specifications and Concepts

Segmentation Fault (Core Dumped) is a phrase that many know all too well, so much so that some developers such as yours truly were even granted the pleasurable nickname of ‘segfault’ during their first year at Seneca College. So, when tasked with optimizing a function or two from the GNU C Library (glibc for short), I thought I may as well play a hand in ruining other programmers’ days as well. Seeing that segfault() existed in this library lit up my eyes with mischievous intent and melancholy memories, and I knew I wanted to take a crack at improving it.

Diving Into the Code

Cracking open the segfault.c file located in the debug folder with Vim introduced me to a 210-line source file which included many define-style tags and includes. Past the license and setup (includes, defines) was some of the most amazing code I had read in the past month. Equally readable, to the point, and robust, I was impressed with what it offered compared to many other functions I had looked into which, though not horribly written, were not human-friendly in any way. A great example of such code is the very first function in the file, which looks like the following:

/* We better should not use `strerror' since it can call far too many
   other functions which might fail.  Do it here ourselves.  */
static void
write_strsignal (int fd, int signal)
{
  if (signal < 0 || signal >= _NSIG || _sys_siglist[signal] == NULL)
    {
      char buf[30];
      char *ptr = _itoa_word (signal, &buf[sizeof (buf)], 10, 0);
      WRITE_STRING ("signal ");
      write (fd, buf, &buf[sizeof (buf)] - ptr);
    }
  else
    WRITE_STRING (_sys_siglist[signal]);
}

This function does not look like any optimizations can be applied which would benefit it beyond what is already there. Instead, I think a function which has much more potential for optimizations is the following:

/* This function is called when a segmentation fault is caught.  
 The system is in an unstable state now.  
 This means especially that malloc() might not work anymore.  */
static void
catch_segfault (int signal, SIGCONTEXT ctx)
{
  int fd, cnt, i;
  void **arr;
  struct sigaction sa;
  uintptr_t pc;

  /* This is the name of the file we are writing to.  If none is given
     or we cannot write to this file write to stderr.  */
  fd = 2;
  if (fname != NULL)
    {
      fd = open (fname, O_TRUNC | O_WRONLY | O_CREAT, 0666);
      if (fd == -1)
    fd = 2;
    }

  WRITE_STRING ("*** ");
  write_strsignal (fd, signal);
  WRITE_STRING ("\n");

#ifdef REGISTER_DUMP
  REGISTER_DUMP;
#endif

  WRITE_STRING ("\nBacktrace:\n");

  /* Get the backtrace.  */
  arr = alloca (256 * sizeof (void *));
  cnt = backtrace (arr, 256);

  /* Now try to locate the PC from signal context in the backtrace.
     Normally it will be found at arr[2], but it might appear later
     if there were some signal handler wrappers.  Allow a few bytes
     difference to cope with as many arches as possible.  */
  pc = (uintptr_t) GET_PC (ctx);
  for (i = 0; i < cnt; ++i)
    if ((uintptr_t) arr[i] >= pc - 16 && (uintptr_t) arr[i] <= pc + 16)
      break;

  /* If we haven't found it, better dump full backtrace even including
     the signal handler frames instead of not dumping anything.  */
  if (i == cnt)
    i = 0;

  /* Now generate nicely formatted output.  */
  __backtrace_symbols_fd (arr + i, cnt - i, fd);

#ifdef HAVE_PROC_SELF
  /* Now the link map.  */
  int mapfd = open ("/proc/self/maps", O_RDONLY);
  if (mapfd != -1)
    {
      write (fd, "\nMemory map:\n\n", 14);

      char buf[256];
      ssize_t n;

      while ((n = TEMP_FAILURE_RETRY (read (mapfd, buf, sizeof (buf)))) > 0)
    TEMP_FAILURE_RETRY (write (fd, buf, n));

      close (mapfd);
    }
#endif

  /* Pass on the signal (so that a core file is produced).  */
  sa.sa_handler = SIG_DFL;
  sigemptyset (&sa.sa_mask);
  sa.sa_flags = 0;
  sigaction (signal, &sa, NULL);
  raise (signal);
}

Optimization Ideas

Below are some of my notes and observations which may lead to optimizations that would benefit the function. Further research will have to be conducted before I can attempt to improve the codebase, for segfault.c shares a trait with many of glibc’s functions: highly optimized programming.

Loop Unrolling

  • Line# 109 of ~/debug/segfault.c: PC calculations can occur before the loop itself.

Loop / Variable Unswitching

  • Line# 152 of ~/debug/segfault.c: *name is not used till line 185.
  • Line# 74 of ~/debug/segfault.c: i is not used till line 108.

These are minor optimizations, and as I discover more I’ll append them to the next blog post covering this topic, linking back to this post.

by RayGervais at March 19, 2017 02:44 AM

March 18, 2017


Kevin Ramsamujh

OSD600 Thimble Release 0.2: Remixing from projects list

For this Thimble release, I chose to add another feature to the projects list page. The ability to remix projects straight from the projects list was requested, and this seemed right up my alley. The most difficult part of working on this feature was simply getting a firm understanding of how remixing a project was already handled. The first place I checked was the HTML file for the remix bar, which revealed the URL used to remix a project. This is where I made my biggest mistake: the remix URL contains an id, and I assumed this was the id of the project found in the projects list, but after lots of unexpected results and debugging I realized that it is actually the id of the published version of the project.

Since the id of the published version of the project is not an attribute I was able to access, I had to take the publish_url property and substring it to extract the published id. Using my browser’s debugger helped a lot here, as I was able to see what the published URL was and from that figure out how to obtain the id. Since the URL consists only of host/user/publishId, I figured I could simply find the second occurrence of “/” in the URL after the “https://” and substring the rest of it to get the published id. Finally, all that was needed was to launch the remix URL with the published id in the same window and tab, which I accomplished using window.open().
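A standalone sketch of that extraction (the function and property names here are illustrative, not Thimble’s actual code):

```javascript
// Hypothetical illustration: given a publish URL of the form
// https://host/user/publishId, take everything after the second "/"
// following the protocol to recover the published id.
function getPublishedId(publishUrl) {
  const path = publishUrl.replace(/^https?:\/\//, ''); // drop the protocol
  const parts = path.split('/');                       // [host, user, publishId]
  return parts[2];
}

console.log(getPublishedId('https://example.org/someuser/12345')); // → 12345
```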

The last step to getting this feature done was styling the button. I tried my best to follow the sizing and colors of the delete button that was already there. I also found a remix icon in the resources so I decided to add that in as well. I think having the icon alongside the text could also help with the confusion in vocabulary.

All in all this was once again a fun feature to work on for Thimble that created a lot more frustration and headache for me than it should have. I found that running into errors that you have never seen before can greatly affect the difficulty of the bug or feature you are working on. I found myself completely confused sometimes because I didn’t know why I was getting some errors and this is where I realized that having an online community that I could go to to ask questions is extremely useful. For my next releases I plan to reach out more when I run into these problems to help reduce the headache.


by kramsamujh at March 18, 2017 10:18 PM


Len Isac

Makefile rules & recipes

Recipes for the rules defined in your Makefiles require specific indentation: each line in a recipe (e.g. the “tests” rule below) must start with a tab character.
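As a minimal illustration (hypothetical target names), the recipe lines under each rule must begin with an actual TAB, not spaces:

```make
# Hypothetical Makefile fragment: each recipe line below
# starts with a TAB character, as make requires.
hello: hello.c
	gcc -o hello hello.c

clean:
	rm -f hello
```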

You can run:

cat -n -E -T Makefile

option                   description
-n                       show line numbers
-e                       equivalent to -vE
-t                       equivalent to -vT
-E, --show-ends          display $ at end of each line
-T, --show-tabs          display TAB characters as ^I
-v, --show-nonprinting   use ^ and M- notation, except for LFD and TAB

Which produces something like:

include ../Makeconfig$
$
headers := time.h sys/time.h sys/timeb.h bits/time.h^I^I^I\$
^I   bits/types/clockid_t.h bits/types/clock_t.h^I^I^I\$
^I   bits/types/struct_itimerspec.h^I^I^I^I\$
^I   bits/types/struct_timespec.h bits/types/struct_timeval.h^I\$
^I   bits/types/struct_tm.h bits/types/timer_t.h^I^I^I\$
^I   bits/types/time_t.h$
$
routines := offtime asctime clock ctime ctime_r difftime \$
^I    gmtime localtime mktime time^I^I \$
^I    gettimeofday settimeofday adjtime tzset^I \$
^I    tzfile getitimer setitimer^I^I^I \$
^I    stime dysize timegm ftime^I^I^I \$
^I    getdate strptime strptime_l^I^I^I \$
^I    strftime wcsftime strftime_l wcsftime_l^I \$
^I    timespec_get$
aux :=^I    era alt_digit lc-time-cleanup$
$
tests := test_time clocktest tst-posixtz tst-strptime tst_wcsftime \$
^I   tst-getdate tst-mktime tst-mktime2 tst-ftime_l tst-strftime \$
^I   tst-mktime3 tst-strptime2 bug-asctime bug-asctime_r bug-mktime1 \$
^I   tst-strptime3 bug-getdate1 tst-strptime-whitespace tst-ftime \$
^I   tst-tzname$
$

Where ^I represents a tab character and $ represents a newline character. You can use this to check for valid tab and newline indentation in your recipes in case you run into this error: *** missing separator. Stop.
If you’re using vi, make sure to use :set noet to disable the replacement of tabs with a tab-width’s worth of spaces.


by Len Isac at March 18, 2017 06:47 PM


Ray Gervais

Writing Inline Assembly in C

SPO600 Deliverable Week Seven

For this exercise, the task was described in the following way: “Write a version of the Volume Scaling solution from the Algorithm Selection Lab for AArch64 that uses the SQDMULH or SQRDMULH instructions via inline assembler”. Though this sounds rather complex to the average programmer, I can assure you that it’s easier to delegate or assign such a task than it is to actually implement it if you do not live in an Assembly-centric world. Luckily, this was a group lab, so I have to credit the thought process, the logic officers, the true driving force behind the completion of said lab: Timothy Moy and Matthew Bell. Together, we were able to write inline assembly which completed the requirements on an AArch64 system.

The Assembly Process

Multiple implementations were brought about by the group, some struggling to compile and others segfaulting as soon as the chance arose. One finally showed promise, and all attention was shifted to perfecting it; the final version can be seen below. We modified the custom algorithm from the previous exercise with the inline assembly code, and recorded improved performance compared to the naive C function.

#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

#define SIZE 1000000000
#define VOL 0.75

int16_t output[SIZE];
int16_t data[SIZE];

struct Result{
    double elapsed, elapsed_middle;
    struct timeval time_start,time_end;
    double sum;
};

struct Result sum_naive();

struct Result sum_custom();

int main(){
    int i = 0;
    struct Result r1, r2, r3, r4;
    char* fname = "input_data";
    FILE* fp = fopen(fname, "r");

    if(fp == NULL){
        return 1;
    }
    printf("Reading from '%s'\n", fname);

    // Read the file
    fread(&data, 2, SIZE, fp);    
    fclose(fp);

    printf("Finished reading!\n");

    printf("Testing naive sum...\n");

    r1 = sum_naive();

    printf("Done!\n");
    printf("Testing custom sum...\n");

    r2 = sum_custom();

    printf("Done!\n");

    r1.elapsed = (r1.time_end.tv_sec - r1.time_start.tv_sec) * 1000 + (r1.time_end.tv_usec - r1.time_start.tv_usec) / 1000;    
    r2.elapsed = (r2.time_end.tv_sec - r2.time_start.tv_sec) * 1000 + (r2.time_end.tv_usec - r2.time_start.tv_usec) / 1000;

    printf("Naive Sum: %.5f Time: %.8f\n", r1.sum, r1.elapsed);
    printf("Custom Sum: %.5f Time: %.8f\n", r2.sum, r2.elapsed);

    printf("Naive Time Difference: %.5f\n", r2.elapsed - r1.elapsed);
    return 0;
}


struct Result sum_naive(){
    size_t i;
    struct Result res;
    int16_t val;

    gettimeofday(&res.time_start, NULL);
    res.sum = 0;
    for(i = 0; i < SIZE;i++){
        val = data[i] * VOL;
        output[i] = val;
            res.sum += val;
        }
        gettimeofday(&res.time_end, NULL);

    return res;
}

struct Result sum_custom() {
     int i;
     struct Result res;
     int16_t table[0xFFFF];
     int idx;
     register int16_t volint asm("r20");
     int16_t* p; 

     gettimeofday(&res.time_start, NULL);
     res.sum = 0;
     volint = VOL * 32767;

     for(p = output; p < output + sizeof(int16_t) * SIZE;){
         __asm__ ("LD1 {v0.8h}, [%0];  \
            DUP v1.8h, w20; \ 
            SQDMULH v0.8h, v0.8h, v1.8h; \  
            ST1 {v0.8h}, [%0]"
            : //no output
            : "r"(p),"r"(volint) //register holding pointer (refer as %0), then volint register (refer as %1)
            :
         );
        p += 16;
     }
 
    gettimeofday(&res.time_end, NULL);
    return res;
}

Looking back at the code now, I can see where we neglected compiler-friendly optimizations, such as hoisting common calculations out of the loop, which might improve the performance of the custom algorithm and also reduce multiplication operations. Furthermore, the source code was littered with commented-out implementations, which I have removed from the above, proving that we as a class, and myself as a developer, still lack a basic understanding of Assembly.

We also noted during the closing of this exercise that the custom sum did not work properly. Still, that was not the focus of the lab, so we pressed on. Curious, I made a few changes to implement the optimizations mentioned above to see if there was a performance increase. The new result is below, which effectively shaved 1.13 seconds off the original custom algorithm’s runtime. The biggest change, included below, is simply modifying the loop condition to compare p against a variable computed once before the loop instead of recalculating output + sizeof(int16_t) * SIZE on every iteration.

int16_t* sizeComparator = output + sizeof(int16_t) * SIZE;
for(p = output; p < sizeComparator;) {
    __asm__ ("LD1 {v0.8h}, [%0];  \
        DUP v1.8h, w20; \
        SQDMULH v0.8h, v0.8h, v1.8h; \
        ST1 {v0.8h}, [%0]"
        : //no output
        : "r"(p),"r"(volint) //register holding pointer (refer as %0), then volint register (refer as %1)
        :
    );
    p += 16;
}

Finding Assembly in Ardour

For the second part of this lab, we had to observe why inline assembly was used in one of the listed open source projects, and the musician in me was too curious to pass up the opportunity to look into Ardour’s source code. Ardour is the definitive Linux project aimed at recording, mixing and even light video editing. It is the Pro Tools of the open source world, the FOSS audio producer’s dream. I have not kept up to date with its recent developments, having played with version 2.* on my makeshift Ubuntu Studio workstation years ago.

Using GitHub’s ‘search in repository’ feature, a quick search for ‘asm’ led to 40 results, which, along with the code base itself, can be seen at the following link. For this analysis, I will focus on the first two unique results, which span two files; the first is found in ‘~/msvc_extra_headers/ardourext/float_cast.h.input’ and the latter in ‘libs/ardour/ardour/cycles.h’.

Float_Cast.h.input Analysis

Opening the file displays this description first, which helps in understanding the purpose of said file and answers a few questions such as operating system and CPU architecture targets and configurations:

/*============================================================================
** On Intel Pentium processors (especially PIII and probably P4), converting
** from float to int is very slow. To meet the C specs, the code produced by
** most C compilers targeting Pentium needs to change the FPU rounding mode
** before the float to int conversion is performed.
**
** Changing the FPU rounding mode causes the FPU pipeline to be flushed. It
** is this flushing of the pipeline which is so slow.
**
** Fortunately the ISO C99 specifications define the functions lrint, lrintf,
** llrint and llrintf which fix this problem as a side effect.
**
** On Unix-like systems, the configure process should have detected the
** presence of these functions. If they weren't found we have to replace them
** here with a standard C cast.
*/

/*
** The C99 prototypes for lrint and lrintf are as follows:
**
** long int lrintf (float x) ;
** long int lrint (double x) ;
*/

The file itself seems to have functions which all call the same asm code and return differently cast variables. The assembly code is below this paragraph; it may differ throughout the file, outside the scope of my analysis and the current window’s code.

_asm
   { fld flt
     fistp intgr
     } ;

FLD Instruction

The fld instruction loads a 32 bit, 64 bit, or 80 bit floating point value onto the stack. This instruction converts 32 and 64 bit operand to an 80 bit extended precision value before pushing the value onto the floating point stack. (University of Illinois)

FISTP Instruction

The fist and fistp instructions convert the 80 bit extended precision variable on the top of stack to a 16, 32, or 64 bit integer and store the result away into the memory variable specified by the single operand. These instructions convert the value on tos to an integer according to the rounding setting in the FPU control register (bits 10 and 11). As for the fild instruction, the fist and fistp instructions will not let you specify one of the 80×86’s general purpose 16 or 32 bit registers as the destination operand.

The fist instruction converts the value on the top of stack to an integer and then stores the result; it does not otherwise affect the floating point register stack. The fistp instruction pops the value off the floating point register stack after storing the converted value. (University of Illinois)

What This All Means

Due to the lack of support for the lrint and rint functions on WIN32, they had to be implemented here for proper operation of the program. Once handed a floating point value, as in the function outlined below, the asm code handles converting (or casting, in native C terms) the float to an integer, with the converted value stored in the specified memory variable.

__inline long int
lrintf (float flt)
{    int intgr;
    _asm
    {    fld flt
        fistp intgr
        } ;

    return intgr ;
}

Cycles.h Analysis

Opening this file gave another explanation of its purpose at the top, a standard among many of the files here and one that I hope to adopt in my own future projects:

/*
* Standard way to access the cycle counter on i586+ CPUs.
* Currently only used on SMP.
*
* If you really have a SMP machine with i486 chips or older,
* compile for that, and this will just always return zero.
* That's ok, it just means that the nicer scheduling heuristics
* won't work for you.
*
* We only use the low 32 bits, and we'd simply better make sure
* that we reschedule before that wraps. Scheduling at least every
* four billion cycles just basically sounds like a good idea,
* regardless of how fast the machine is.
*/

The file itself seems to be an interface between the cycle counter and the CPU architecture, attempting to support where it can the different architectures with the same scheduling platform.

#define rdtscll(lo, hi) \
__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi))

__ASM__ __VOLATILE__ Analysis

The typical use of extended asm statements is to manipulate input values to produce output values. However, your asm statements may also produce side effects. If so, you may need to use the volatile qualifier to disable certain optimizations.

GCC’s optimizers sometimes discard asm statements if they determine there is no need for the output variables. Also, the optimizers may move code out of loops if they believe that the code will always return the same result (i.e. none of its input values change between calls). Using the volatile qualifier disables these optimizations. asm statements that have no output operands, including asm goto statements, are implicitly volatile. (GCC GNU Documentation)

What This Means

The use of the volatile qualifier disables those optimizations which might deem the asm code useless in the program, or assume the code is consistent throughout a loop. Disabling such optimizations allows the developer deeper control over, and integration of, their variables in the scope of the function and program. This explanation is questionable, mind you, for the volatile documentation spans pages and pages of examples which contradict or support my own explanation.

Final Thoughts on Ardour’s ASM Code

From what I gather, this code is used to allow support for a greater array of systems, be it Windows 32-bit systems or AArch64. The CPU scheduler seems to play a pivotal role in how Ardour handles the various recording modes and cycles which play into real-time analysis of the output. The files themselves seem to be an afterthought, someone’s dedication to updated compatibility for an already stable system. That may simply be the sample bias of looking into the select few files that I did for this analysis.

by RayGervais at March 18, 2017 06:05 PM


Rahul Gupta

Fixing Bugs for Release 0.2

For Release 0.2 I was working on these issues:

  1. #Issue-1754 – Editor menu arrow disconnect on windows 10 on boot-camp
  2. #Issue-1780 – Drop down menus on your project page overflows past the edge of the page.

For release 0.2 I decided to work on two bugs instead of one; I kind of wanted to challenge myself to finish two bugs in the given time frame. My ultimate goal was to fix my bugs and get them merged into the master branch.

Working on #Issue-1754

Fixing this issue was quite interesting. First of all, I didn’t have Windows 10 installed on my machine, so I had to install it, and then I was not able to reproduce the bug, probably due to my resolution settings, as this bug was strictly associated with the native Boot Camp Windows resolution and was an OS-specific bug. I used the browser developer tools to find the file and the function associated with the bug, and @flukeout and @humphd suggested changing the values, so I also took that into account.

To fix this issue I navigated to the “public/editor/stylesheets/editor.css” file and edited the top value in #editor-pane-nav-options-menu:after.


After changing the value, the editor menu displayed as required and the bug was fixed.


Working on #Issue-1780

Fixing this issue was a bit tricky because it only occurred on the My Projects page. On the main homepage the drop-down appeared fine, but it was inconsistent on the My Projects page: whenever we try to open the drop-down menu there, its length overflows past the edge of the page. I used the browser developer tools to find the file and the function associated with the bug.

To fix this issue I navigated to the “public/resources/stylesheets/userbar.css” file and edited the .dropdown-content rule.


After modifying the file, the drop-down displays correctly, matching the main homepage.


For my next release my aim is to target a more difficult bug; this time I will try one large bug instead of two issues. I was able to challenge myself successfully this time by fixing two bugs in the given time frame, and being able to merge them was one of my big achievements. I learned a lot by working on these issues about the dependencies and the control flows, and it added to my understanding of usability. I also realized that things don’t always go as planned; since it’s an open source community, it teaches us to be more active and diligent while working on a bug. Overall, I found solving these two bugs in Mozilla Thimble interesting and fun, and I learned a lot from them.



by rahul3guptablog at March 18, 2017 05:18 PM


Oleg Mytryniuk

Open Source tools that can make your life easier

So far this is my favorite lab! In my opinion, it would be great to have this lab as one of the first in the course, because it teaches many great things that every programmer should be familiar with!

The steps are well described and the lab is straightforward.
In my previous release, I was working with npm dependencies, and it was interesting to learn more about npm.

At the beginning, I simply created my own repository on GitHub. The next step is a little bit tricky if you are a Windows user:
Running the command npm init calls a utility that walks you through creating a package.json file. When you reach the end and try to accept the final step, as a Windows user you may not be able to do so. There is an issue with Git Bash, which is why for this step I would recommend using cmd. You can also press Ctrl+C in Git Bash, but in my case that did not solve the problem, so I used cmd to initialize package.json.

After that I created a seneca.js file where I implemented two simple functions.

The first one validates whether an email is a valid Seneca email:

exports.isValidEmail = function (email) {
  const senecaEmailFormat = /^[a-z]{3,}[0-9]{0,3}@myseneca.ca$/;
  if (senecaEmailFormat.test(email)) {
    return 'valid';
  }
  return 'invalid';
};

The second one creates a Seneca email, based on the passed username:

exports.formatSenecaEmail = function (name) {
  const completeName = `${name}@myseneca.ca`;
  return completeName;
};
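As a quick sanity check, the validation regex from the first function can be exercised standalone (a self-contained sketch; the logic is copied from seneca.js above):

```javascript
// Same validation logic as in seneca.js, inlined so it runs on its own.
const isValidEmail = function (email) {
  const senecaEmailFormat = /^[a-z]{3,}[0-9]{0,3}@myseneca.ca$/;
  if (senecaEmailFormat.test(email)) {
    return 'valid';
  }
  return 'invalid';
};

console.log(isValidEmail('jdoe1@myseneca.ca')); // → valid
console.log(isValidEmail('JDoe@gmail.com'));    // → invalid
```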

The next step (an optional one) took some time to accomplish. I always enjoy the OSD labs and I did not want to miss a chance to do something new.
I used the provided tutorial. It was interesting to learn about npm libraries such as co and commander. Following the tutorial, I was able to accomplish the first part, but to finish the second I had to read the commander project’s official GitHub page, which describes how to call functions.

Basically, what I ended up with:
-created index.js, which is called by the seneca command and receives command line arguments;
-imported the seneca.js module into index.js;
-implemented the logic where index.js, depending on the passed arguments, calls a particular function from the seneca.js module.

Here is my solution:

#!/usr/bin/env node
const program = require('commander');
const seneca = require('./seneca.js');

program
  .option('-v, --email <email>', 'verifies the email address given as a Seneca email')
  .option('-f, --format <name>', 'formats the name given as a Seneca email')
  .parse(process.argv);

if (program.email) console.log(seneca.isValidEmail(program.email));
if (program.format) console.log(seneca.formatSenecaEmail(program.format));


It was a very interesting task, and I advise people to try it.

INSTALLING ESLINT
Another interesting part of the lab was learning about the ESLint module. This is the part I referred to at the beginning of the post when I said it would be useful to have this lab as one of the first labs in the course.
When I was installing the module, I faced another problem, similar to what I experienced with npm init: I could not accomplish some steps in Git Bash because the up and down arrow keys did not work. The solution was to type the numbers instead (e.g. option 2 = 2).
Why did I say it would be great to learn ESLint? I just remember a few pull requests I made that had no logical errors, but had syntax errors. It was extra work for me and for reviewers on GitHub. Using such a tool, in my opinion, can save us time.

When I ran eslint, I got many errors.

Some of them were easy to fix, but some errors were strange to me. I started to read the ESLint documentation to understand the error descriptions, and I saw this awesome command: eslint --fix

Basically, this command checks and updates your code to meet the syntax requirements! That is so amazing! Love it!
Finally, I fixed all the errors!


TRAVIS CI:
I had been waiting to learn about Travis CI since last week, and there were two reasons why:

– A few days ago, a Travis CI test was my pain 🙂 I could not submit my pull request because my code did not pass the tool’s checks. That is why I really wanted to learn more about Travis CI.
– I have a friend who works on an open-source project, and reviewing pull requests is one of his responsibilities. I was telling him about my OSD course experience and mentioned Travis CI being used to validate users’ pull requests. He was extremely interested when I told him about this tool because it can help him significantly.

Setting up Travis CI was not hard; in just a few minutes my repository was ready to test pull requests.

To sum up, I enjoyed doing this lab a lot. I feel these tools are extremely useful in development, and in the future I will definitely use them.


by osd600mytryniuk at March 18, 2017 05:06 AM

March 17, 2017


Peiying Yang

Two good editors for programming

Sublime vs. Atom

Why is a good editor necessary?

  • A good editor can help to improve your efficiency.
  • It can help you fix typing mistakes.
  • It formats your code well.

Sublime

https://www.sublimetext.com/

Sublime Text is a cross-platform source code editor. It is free to use, and it supports most of the languages we see in use today.

Open files


Change indent


Find keywords

ctrl + p to search file

ctrl + f to search keyword in this file

ctrl + shift + f to search in many files


Split views


Change formats


Zen coding plugin


Atom

https://atom.io/

Atom is a free and open-source cross-platform text editor.

Most of the shortcut keys are the same as in Sublime.

Support git


More user-friendly

To customize the editor, we can easily download and install packages or themes from within the editor, without opening a separate browser.

4.gif

 

Some useful extensions for web page programming that I installed

1.Auto Close HTML package

Under normal circumstances, ending tags will be inserted on the same line for inline elements, and with \n\t\n in between for block elements. This is determined by attaching an element of the given type to the window and checking its calculated display value. You can use the Force Inline and Force Block preferences to override this.

2.CSS Autocomplete package

CSS property name and value autocompletions in Atom

3.HTML Autocomplete

4.linter-spell-javascript

 

Conclusion:

I will choose Atom because it is open source: many people are working on it to make it better. Compared with Sublime, it is more user-friendly, and we can install a package very easily without opening a separate browser.


by pyang16 at March 17, 2017 09:46 PM


Andrew Smith

Markdown is readable? Yeah, and my prose is gold.

Too many people I know have been telling me in the last few years how wonderful and beautiful markdown is. I never believed them, because I have quite a bit of experience with wiki markup, whose proponents have been saying exactly the same things about wiki markup for several years prior to markdown’s release.

Just a few minutes ago I had to read some documentation for installing a piece of software I just cloned off Github (there’s an obvious rant there for another time). It was a readme.md, and it looked like this:

On what planet must you live to be telling me that this is readable? I’ve written Perl code as a student that was more readable than that.

Bah, it never ends. Next thing you know writing machine-specific assembly code will be cool again.

by Andrew Smith at March 17, 2017 06:02 PM


Zenan Zha

Brackets vs. Atom

Brackets vs. Atom

Brackets and Atom are open-source code editors developed by Adobe and GitHub, respectively.

In this blog, I will introduce some basic operations in these two code editors. Hopefully it will give you some idea of which one you should choose.


Q1: How to open a file, a folder of files?

Left: Atom vs. Right: Brackets


Q2: How to change your indent from tabs to spaces, 2-spaces, 4-spaces, etc?

Left: Atom vs. Right: Brackets

Q3: How to open the editor from the command line?

Just call: atom || brackets

Q4: How to find things?

Click on the drop-down menu: Find.

Q5: How to split the screen into multiple panes/editors/views?

Left: Atom vs. Right: Brackets


Q6: How to change keybindings?


Atom: 
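In Atom, custom keybindings live in keymap.cson (reachable from the menu via File → Keymap…). A small sketch, where the chosen key combination is arbitrary:

```cson
# keymap.cson — bind Ctrl+Alt+D to Atom's built-in duplicate-lines command
'atom-text-editor':
  'ctrl-alt-d': 'editor:duplicate-lines'
```

The selector ('atom-text-editor') limits the binding to the editor pane, and the value is the command name as it appears in the Command Palette.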


Q7: What are some common key bindings?


Atom -> Find common key: Ctrl + Shift + P


Q8: How to enable/use autocomplete for coding HTML, JS, CSS, etc?


Atom: press Tab.

Brackets: press Enter.

END.

Download: Atom -> https://atom.io/ ||  Brackets -> http://brackets.io/


Add-On:

Many useful add-ons are available online to download. Here I will use several Atom add-ons as examples to show you how they work.

In Atom, you can simply go to Settings to install add-ons.






More interesting add-ons: https://atom.io/packages (linter, minimap-pigments, etc.)

by zenan zha (noreply@blogger.com) at March 17, 2017 06:59 AM


John James

SPO Project Function Pick

I picked the function STRCSPN.  This function scans str1 for the first occurrence of any of the characters that are part of str2, returning the number of characters of str1 read before this first occurrence. The search includes the terminating null-characters. Therefore, the function will return the length of str1 if none of the characters of str2 are found in str1.

Here is the code in the GNU Library:

size_t
STRCSPN (const char *str, const char *reject)
{
  if (__glibc_unlikely (reject[0] == '\0') ||
      __glibc_unlikely (reject[1] == '\0'))
    return __strchrnul (str, reject[0]) - str;

  /* Use multiple small memsets to enable inlining on most targets. */
  unsigned char table[256];
  unsigned char *p = memset (table, 0, 64);
  memset (p + 64, 0, 64);
  memset (p + 128, 0, 64);
  memset (p + 192, 0, 64);

  unsigned char *s = (unsigned char *) reject;
  unsigned char tmp;
  do
    p[tmp = *s++] = 1;
  while (tmp);

  s = (unsigned char *) str;
  if (p[s[0]]) return 0;
  if (p[s[1]]) return 1;
  if (p[s[2]]) return 2;
  if (p[s[3]]) return 3;

  s = (unsigned char *) PTR_ALIGN_DOWN (s, 4);

  unsigned int c0, c1, c2, c3;
  do
    {
      s += 4;
      c0 = p[s[0]];
      c1 = p[s[1]];
      c2 = p[s[2]];
      c3 = p[s[3]];
    }
  while ((c0 | c1 | c2 | c3) == 0);

  size_t count = s - (unsigned char *) str;
  return (c0 | c1) != 0 ? count - c0 + 1 : count - c2 + 3;
}
My plan is to rewrite this code and test whether I can improve this function's run time, or make it more optimal with inline assembly for ARM processors.
I am not yet sure how I could rewrite this function completely.

by johnjamesa70 at March 17, 2017 05:09 AM

SPO Lab 7 (Part A – Part B) Fun with Inline Assembly!

So in this class, we had worked with C and with assembly separately. Now we get to work with them together! That's right: we get to use inline assembly in our C code! This was a hard lab, but I feel better for it.

Part A:

Convert one of our C functions into assembly, which seemed kind of daunting and aggravating, because now you have to convert lovely C code into assembly…

So we turned this for loop:

for (i = 0; i < SIZE; i++) {
  idx = (unsigned short) data[i];     /* index into the precomputed scaling table */
  res.sum += output[i] = table[idx];  /* store the scaled sample and keep a checksum */
}

To this:

for(p = output; p < output + sizeof(int16_t) * SIZE;){
 __asm__ ("LD1 {v0.8h}, [%0]; \
 DUP v1.8h, w20; \ 
 SQDMULH v0.8h, v0.8h, v1.8h; \ 
 ST1 {v0.8h}, [%0]"
 : //no output
 : "r"(p),"r"(volint) //register holding pointer (refer as %0), then volint register (refer as %1)
 :
 );
 p += 16;
}

Our code turned out to be a lot faster than the normal version. We tested with 1 million data points: while the normal version took 15 seconds, the inline assembly version took 1.

I believe inline assembly is very useful and can benefit performance, but the main problem is that you need to optimize it per architecture, which can take a lot more time than just writing it in C.

Part B:

For this part of the lab, we had to find an open source project that uses inline assembly. I chose the project mosh, which only uses inline assembly once in the entire project.

Here is the code:

#if __GNUC__ && !__clang__ && __arm__
 static inline block double_block(block b) {
 __asm__ ("adds %1,%1,%1\n\t"
 "adcs %H1,%H1,%H1\n\t"
 "adcs %0,%0,%0\n\t"
 "adcs %H0,%H0,%H0\n\t"
 "it cs\n\t"
 "eorcs %1,%1,#135"
 : "+r"(b.l), "+r"(b.r) : : "cc");
 return b;
 }
 #else
 static inline block double_block(block b) {
 uint64_t t = (uint64_t)((int64_t)b.l >> 63);
 b.l = (b.l + b.l) ^ (b.r >> 63);
 b.r = (b.r + b.r) ^ (t & 135);
 return b;
 }
 #endif

Questions about this code:

  • How much assembly-language code is present
    • Not a lot; it is only used once in the project
  • Which platform(s) it is used on
    • The assembly language is for ARM processors
  • Why it is there (what it does)
    • It provides a faster, hand-tuned version of this function for ARM processors
  • What happens on other platforms
    • On other platforms, it runs just the C version of the code in the else branch
  • Your opinion of the value of the assembler code vs. the loss of portability/increase in complexity of the code

I feel that if you write assembly for every processor, you make the project more challenging and complex for the people developing it.


by johnjamesa70 at March 17, 2017 04:50 AM


Zenan Zha

Thimble VR: Create A-Frame Starter Kit (Issue 1609)

I was planning to fix bug 1609 as the second part of my release one.
However, it is such a big project that it needs a lot of time to finish.

As a result, I will work on it as my second release.

Hopefully I can finish the documentation faster...

Here is my issue: #1609

by zenan zha (noreply@blogger.com) at March 17, 2017 02:50 AM

Release 2 - need to add more text

This release is not finished yet, simply because there are too many text files I need to write.

I thought this would be a simple fix, until I found out I need to write a whole page of documentation for teachers to understand where to start.

I do need some more time to finish writing the "AFrame Teaching Kit". I will also invite some of my Canadian friends to help me proofread what I wrote and will write.

The programming part of this release is pretty clear and easy to edit. I used the search function inside GitHub to find where main.js is located and how the files interact with each other. The comments inside the webpage also helped a lot in locating the files I needed. The URL of the remix page is constructed at run time; anyone who wants to work on that should be really careful about it. Searching for the keyword "Comic" in the GitHub project will give you some of the paths of the files you need.

I tried a lot and also made a lot of mistakes. The AFrame Kit finally looks fine to me, except for the "AFrame Teaching Kit". Please forgive me, because documentation is really my weakest part of programming...

Hopefully the pull request will be up at the end of this week. I could also push my recent work if needed, but I think they won't accept a half-done Teaching Kit.

by zenan zha (noreply@blogger.com) at March 17, 2017 02:49 AM

VirtualBox causes SYSTEM_SERVICE_EXCEPTION / Windows 10 Blue Screen of Death - Disable Hyper-v

Disable Hyper-V!!!
Disable Hyper-V!!!
Disable Hyper-V!!!

It is sooooooooooooo important that I have to say it three times.

Windows 10 automatically enabled Hyper-V (for me, at least).
Having Hyper-V enabled will cause a Blue Screen of Death when you shut down your virtual machine in VirtualBox.

If you have the same problem, just try this first. Disabling Hyper-V is not harmful to your system, so just try it.

How to disable Hyper-V:
Go to Control Panel ->
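If you prefer the command line, Hyper-V can also be turned off from an elevated (run as Administrator) prompt; a reboot is required afterwards. These are standard Windows commands, offered here as an alternative to the Control Panel route:

```powershell
# Option 1: keep the feature installed but stop the hypervisor from loading
bcdedit /set hypervisorlaunchtype off

# Option 2: remove the Hyper-V feature entirely (PowerShell)
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
```

Option 1 is easy to undo later with "bcdedit /set hypervisorlaunchtype auto" if you ever need Hyper-V back.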



I spent hours tracking down this issue. I hope this blog can help someone.

If you find any mistake I made in this blog, please tell me in the comments. Thank you.

by zenan zha (noreply@blogger.com) at March 17, 2017 02:48 AM

March 16, 2017


Oleg Mytryniuk

Release 0.3.

Right now we have no Internet at Seneca and it seems to be a perfect time for writing my blog post about Release 0.3 🙂

The first step, as usual, is to pick a bug, and this time it was a little bit hard to choose one. Working on my previous releases, I have noticed that I prefer to fix existing bugs instead of implementing new features. I think it is caused by the fact that I do not feel very confident working in JavaScript; or, to put it better, compared to other languages I do not have a lot of experience with JavaScript. Basically, that is why I think fixing bugs is the best option for me: I like it, I feel confident, and I keep thinking: why should I implement something new if we still have bugs to fix?
As I mentioned in my post about Release 0.2, for Release 0.3 I wanted to work on something different from Thimble. However, I think it is better for me to stick with Thimble, as I have spent a lot of time with this app. It took some time to choose a bug, and I am very thankful to Professor Humphrey, who helped me with my choice. He suggested a few bugs; I spent about a day trying to reproduce them, and finally I decided to work on this bug.

I would also like to mention that I did not want an old bug 🙂 I guess everyone has the same thought about old bugs: "If they still are not fixed, they must be extremely hard". As a result, I got an OLD bug, and one related to Brackets 🙂

I knew about this bug before, because the professor covered it in one of his classes.
Basically, here is what happens:
You type something in the editor and then immediately try to change the title of your project; however, after a few seconds your cursor automatically focuses on the editor again instead of staying in the title field.

bug3

Working on this bug, I used the same technique that helped me with my previous releases.

1. Reproduce the bug. It took some time to reproduce the issue; I noticed later that I was supposed to click the "Save" button.
2. Spend some time understanding the flow of the error (which modules are involved, which function is called, what is happening).
3. Try making some changes to see how they impact the app.
I am a big fan of putting console.log calls or alerts in the code. It really helps me see what is happening in the code. I also work with the Chrome debugger, which in my opinion is a very good tool for tracking down problems.

After performing all these steps, I found the line that causes the error.

MainViewManager.focusActivePane();

If we work in the editor and make any changes, the app automatically saves them and calls focusActivePane(). This function puts focus on the editor. In our case, when we change the title right after typing in the editor, the app interprets typing in the title as changes in the editor and calls focusActivePane(), which moves the cursor from the title field back to the editor.

The easiest solution is just to get rid of the function call. Really, I tried commenting the call out, and here is the result.

bug3solution
It seems to work. However, I would really like to work on the bug some more, to see how my changes can affect the app.

I am not sure, but I think I will need to create a flag that checks whether we are working in the editor or not, and depending on it, either call or not call focusActivePane().
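The flag idea could be sketched like this. All the names here are my own, not Brackets' real API: focus/blur events on the title field would flip the flag, and the auto-save handler would consult it before calling MainViewManager.focusActivePane().

```javascript
// Hypothetical guard object tracking whether the title field has focus.
function makeFocusGuard() {
  let titleFocused = false;
  return {
    onTitleFocus() { titleFocused = true; },   // wire to the field's focus event
    onTitleBlur() { titleFocused = false; },   // wire to the field's blur event
    shouldRefocusEditor() { return !titleFocused; },
  };
}

// In the save handler, instead of calling focusActivePane() unconditionally:
//   if (guard.shouldRefocusEditor()) MainViewManager.focusActivePane();
const guard = makeFocusGuard();
guard.onTitleFocus();
console.log(guard.shouldRefocusEditor()); // → false: leave the title field alone
```

The point of the indirection is that auto-save keeps working as before; only the focus-stealing side effect becomes conditional.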

Looking forward to fixing this bug!


by osd600mytryniuk at March 16, 2017 09:25 PM


Ray Gervais

Writing Good Contribution Messages

An OSD600 Lecture

My Contribution Messages

On Tuesday, the class was told a key fact that I imagine not a single person in the room had ever considered before: commit messages, pull requests, and even issue descriptions are the single most challenging thing for any developer to get right. This was in the context of working in an open source community. I was curious, so I looked into my pull request titles, commit messages and pull request descriptions. I've included a few of each below for the curious:

Fixed package.json to include keywords

Issue Description

I noticed that you did not have keywords for this module, so I added ones that seemed relevant. If you’d like others, or different ones, I’d be happy to add them. (Relating back to the fixed package.json to include keywords pull request)

Commits

  • Added keywords to package.json
  • Updated package.json to include keywords (formatted properly)
  • Fixed spelling of Utility in Keywords

Implements Thimble Console Back End

Issue Descriptions

This is the first step toward implementing the suggested Javascript console

Commits

These are all based around the Thimble Console enhancement mentioned above, with each commit deriving from my add-new-console branch (which I may add, according to Mozilla’s repository standards, is not a good branch name, and instead should be named “issue ####”).

  • Added ConsoleManager.js, and ConsoleManagerRemote.js.
  • Added ConsoleShim port. Not Completed yet.
  • Added data argument to send function on line 38 of PostMessageTransportRemote.js
  • Removed previous logic to PostMessageTransportRemote.js
  • Added ConsoleManager injection to PostMessageTransport.js
  • Syntax Fix
  • Fixed Syntax Issues with PostMessageTransportRemote.js
  • Fixed Caching Reference (no change to actual code).
  • Added Dave’s recommended code to ConsoleManagerRemote.js
  • Added consoleshim functions to ConsoleManagerRemote.js
  • Added isConsoleRequest and consoleRequest functions to consoleManager.js
  • Changed alert dialog to console.log dialog for Bramble Console Messages.
  • Fixed missing semicolon in Travis Build Failure.
  • Removed Bind() function which was never used in implementation.
  • Removed unneeded variables from ConsoleManager.js.
  • Fixes requested changes for PR.
  • Updated to reflect requested updates for PR.
  • Console.log now handles multiple arguments
  • Added Info, Debug, Warn, Error console functionality to the bramble console.
  • Implemented test and testEnd console functions.

Looking Back

Analysing the commit messages alone shows that, though I tried, they were not as developer-friendly as the me of a few weeks ago believed, when he thought his commit messages were the gold standard for a junior programmer. Perhaps it is a fusion of previous experience and recent teachings, but there is a definitive theme to the majority of my commit messages: they often describe a single action or scope. This was a popular committing style among some of the professors at Seneca, and even Peter Goodliffe, who wrote the must-read Becoming a Better Programmer, holds up short, frequent commits that are singular in change or scope as a best practice. The issue, as can be seen above, is not that I was following this commit style, but what I described in each commit. Looking back now,

removed bind() function which was never used in implementation

would be arguably the best of the commit messages had I not included the ‘()’. Here is why:

  1. It addresses a single issue / scope, that being the dead code which I had written earlier.
  2. It explains in the commit message the reason for removing the code, making it easier for maintainers to get a sense of context without viewing the code itself.

There are some items I'd improve in that commit message, such as rephrasing 'which was never used in the implementation' to 'which is dead code'. The latter is much more specific about the fact that the function is never used at all, whereas the current message only claims it is unused in the current implementation. Much clearer.

Furthermore, I think it's clear that the pull request messages are simply not up to a high enough standard to even be considered 'decent'. This is an area I will focus on more in the future, for it is also the door between your forked code and the code base you're trying to merge into. Not writing a worthwhile pull request description (one that provides context for the maintainers, an explanation of what the code does, and any further comments or observations that may help down the road) does everyone involved a disservice.

To conclude this section, I'll touch briefly on what was the most alien concept to yours truly, and how this week's lesson opened my eyes to developer and community expectations. Regardless of commit messages, one of the most important things to put real emphasis on is the pull request title, which is what you, the maintainers and code reviewers, and even the community see. Though mine encapsulate the essence of my code's purpose, their verbosity may be overlooked, or seen as breaking a consistent and well-established pattern: the 'fix #### ' pattern. This pattern allows GitHub to reference said issue in the pull request, and close it when the request is merged into the master branch. My titles did not follow said pattern, meaning that a naive developer such as yours truly would reference the issue itself in the description, which means the code maintainer also has to find the issue and close it manually after the merge.
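As a hypothetical illustration of that pattern (the issue number and wording here are invented, not from a real Thimble issue), a title in the expected form would look like:

```
Fix #1742: Keep cursor in the project title field while renaming
```

With the issue number up front, GitHub can reference the issue from the pull request and close it on merge, sparing the maintainer any manual cleanup.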

Suggestions

Dave shared with us this link, describing it as one of the best pull requests he had seen from a contributor. Analysing it, it was apparent that the contributor put effort, time and energy into everything related to his code and description. His outgoing and enthusiastic style of writing was mixed with humble opinions and emojis, creating a modern piece of art: mixing colour and text, before-and-after comparisons, and code. His commit messages follow a playful theme where appropriate, and a much more to-the-point description where essential (such as major code changes). Looking back now, I can see why Dave and a few others regard this pull request as a pivotal teaching tool for proper documentation techniques when working in an open source community.

Such suggestions are not aimed at the hobbyist or junior developer alone, for a quick search of various popular open source projects shows that all developers struggle with the above at times. That is an interesting note, since we as juniors also strive to emulate the style of those more experienced, creating a trickle-down effect at times. This isn't to point out the flaws of bad messages in the average programmer or senior developer, but simply to share them with those who've been in the industry as well. We are all at fault, and the learning experience is eye-opening.

by RayGervais at March 16, 2017 04:57 PM


Christopher Singh

The long and gruelling process of working with code you don’t know – OSD600

A good example from this release is https://github.com/mozilla/brackets/pull/601, which I've been working on for a long while now. It's especially something else when you have to write documentation for it. There are some concepts that I still don't understand, but I have been closely copying/following along with already existing code. I know of a student who has undergone something similar with almost the same code. And I think it's generally a bad idea not to be consistent with the rest of the code in the project, because you'll inevitably be asked to change it; and this isn't necessarily a bad thing.

However, one thing that has surprised me about this course is the amount of help you can get. Generally, with assignments and such, you’re on your own. If you hit a barrier, you have no one but yourself, and if you don’t figure it out in time, you’re done. But with a programming assignment, there are thousands of ways to solve a problem. With this course, the developers in charge of the open source project you’re working on expect code that assimilates and is consistent with their code. It’s different, but at least they help and actually care. After all, it is their product.


by cgsingh at March 16, 2017 01:38 PM


Oleg Mytryniuk

Open Source Experience?? That is a big plus, in my mind

Since the semester is coming to an end, and the study break was like a milestone, I have decided to write up my observations about the OSD course and share my thoughts, especially after I heard from a few people that they read my blog 🙂

As a peer tutor I talk to many students, and some of them ask me which courses they should choose as professional options. It is my own opinion, but I suggest they choose the OSD600, DSA555 and MAP courses. From my experience, these courses are the hardest, but at the same time they are the courses where I learned the most. They teach you something that will make you stand out among other students (at least college students) applying for their first full-time job.

Just imagine that in your resume you mention that you have experience working on open-source projects like Mozilla or Adobe Brackets. How many people do you expect to see with this experience? The answer is: not many.

I just remember a conversation with one of my friends, who told me about his friend's interview experience: "I had an interview and, believe it or not, they did not ask me about my big academic projects; they asked me about my own small project with ReactJS". Uniqueness: it is what can be your key to success.

By the way, I had a chance to talk to our program coordinator today. In our conversation, Mr. Tipson asked me about the courses I have this semester. I told him that I have a few courses and really enjoy OSD and DSA. He was glad that I really enjoy these courses, and he also mentioned that, in his opinion, it is extremely good for students to have such "university level" courses here at Seneca. I absolutely agree with him.

Coming back to my own experience with the Open Source course, I would like to tell you that I decided to participate in Mozilla's chat system.

The reason I decided to register in the system was to ask people about an issue I faced working on Release 0.2. Having started as a person looking for advice, I later became one who helped others. It is hard to explain what I felt when I was helping somebody from another part of the world figure out how to set up Thimble and Brackets. I just remember how welcomed I was when I started contributing to the Thimble open-source project, and I want new contributors to have the same feeling. It is basically "Do to others as you would have them do to you". How happy I was to receive thanks from the person I helped.

Even though when working on open-source projects we do not talk face-to-face as in real life, we can still easily feel the attitude of the person we are chatting with. I think it is very important. Just imagine how you feel when, after your pull request, you get something like:

A) “You did it wrong”
or
B) “Thank you very much. Everything is good, I would just advise you to fix…..”

Both answers are acceptable, but they have a different impact on you as a newbie in the open-source world. Who knows: had I received answer A, I would probably have stopped actively participating in the open-source community; but since I got answer B, I felt that people saw me as a person who can make their project better, not as a person who does something wrong. That is great! 🙂


by osd600mytryniuk at March 16, 2017 12:33 AM

March 14, 2017


Matt Welke

New Documentation Framework, Deleting My System Root, and Becoming One with JavaScript

It’s been a busy few days recently. There are a few things worth talking about.

New Documentation Framework

The first is that I’ve decided to switch gears in terms of how to present the documentation for Rutilus. Previously, we were using a custom solution by using React to make a Single Page Application (SPA) for the documentation. React makes doing UIs easier which is nice, and SPAs provide good performance, so these were pluses. This was probably better than just writing HTML and CSS by hand. However, when you roll your own solution, it’s often hard to cover every base like relying on a framework might provide for you.

We had a few issues. The most prominent of them was just how much effort had to go into styling it to make it look nice. Things like how to make lists appear properly. What happens when there are too many links in the nav list? They would bleed over into the other section. It didn't look very good once we started adding a lot of documentation. Plus, it wasn't mobile friendly. Making it mobile friendly would have required rewriting the CSS that went into it, which would be too much work at this point.

Writing documentation for it was also very slow. I had to understand JSX (React’s HTML/JS hybrid) to be able to write documentation (and therefore, so would any future Rutilus maintainers). The thing had to be compiled using Webpack, so every time I wanted to see a change I made in the browser, I had to wait at least 5 seconds. This hurts productivity.

I investigated using static site generators. The premise of these frameworks is amusing, but it makes sense for things like documentation. In a world of dynamic sites, these frameworks allow you to write plain text, or often something like markdown, and they then parse it and convert it to a series of HTML, CSS, and sometimes JS files. They boil your text down into a static website. But for things like documentation, this is all you need! You don’t need a database. I found one in particular called Mkdocs. It’s relatively new, only existing for about two years as of today, but it has a polished feel, a 1.0.0+ release, and a lot of activity on GitHub. These are the positive signs I look for when it comes to choosing a library upon which to rely. In fact, the team behind it even fixed a bug already that I already found and reported. See the next section of my blog about that. 😛

Deleting My System Root

So this is probably the funniest and most destructive thing I’ve ever done. Long story short, I deleted everything I could on my computer without sudo privileges, starting from my system root. This wiped my home folder. I was listening to YouTube at the time, and when I did it, my sound cut out, my Unity shell disappeared, my Windows key stopped bringing up the shell, the file explorer stopped functioning… I basically broke the matrix. But at least I didn’t lose any work, since I have a habit of pushing to remote very frequently as I work, and only doing my work using Git.

Here’s how I did it:

  • Mkdocs has a build command: “mkdocs build”.
  • If you run the build command with the “--clean” flag, it will delete everything in the build directory before doing a build.
  • You can modify the configuration file for your Mkdocs project to change its build directory.

These all make sense, but the way I used them was just by pure luck a very bad way of using them.

  • I changed my build directory to “/”, since I wanted to build to the *project* root.
  • I ran “mkdocs build --clean”.

Mkdocs proceeded to delete everything it could from my system root, just as I had instructed it to do.
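Reconstructed, the configuration that triggered this looked something like the following (the site name is a placeholder; site_dir is Mkdocs' real build-directory option):

```yaml
# mkdocs.yml — NEVER point site_dir at "/"
site_name: Rutilus Documentation
site_dir: /    # "mkdocs build --clean" empties this directory before building
```

The intended value would have been a path relative to the project, such as the default "site", which is exactly what the fix the Mkdocs team shipped guards against.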

The community behind Mkdocs has proven to be a healthy one. I reported this to them via a GitHub issue, mostly as a warning against doing something as stupid as I just did. I didn’t really consider it a bug. I just wanted to provide some advice. They considered it a bug. And within a day of it being reported, they had issued a release to fix the bug. Yay open source!

At least I’ve gotten plenty of experience setting up Linux systems since I end up reformatting so often. I plan to not tell my tools to delete my system root again in the future.

Becoming One with JavaScript

With my CDOT presentation coming up tomorrow, where my team mate and I will be presenting about modern asynchronous programming with JavaScript, my team mate and I have had to study up on the inner workings of the programming language to prepare. I’ll be starting off our presentation by describing the inner things that make JavaScript tick and enable it to do the amazing asynchronous things it does. Once again, the Mozilla Developer Network (MDN) has some great material on the subject, and I also encountered a great YouTube video.

It all comes down to JavaScript being designed from the beginning to be used for asynchronous programming. As a reminder, asynchronous programming means that operations that have to wait on outside things do not block. They don’t pause execution and wait for that outside thing (a disk operation, a network request, a user prompt, etc) to finish. Other things can happen in the background.

If you have experience with other programming languages, such as C, Java, or Ruby, you’ll know that that feature isn’t unique to JavaScript. Other languages have support for asynchronous programming too. You would use threads or similar tools to achieve this. But JavaScript is a single threaded runtime (in the browser at least, it is true that Node.js, which runs on the server, has access to multiple threads).

It’s a big difference from other programming languages I’ve used in the past. I actually encountered a Reddit thread on the Ruby on Rails subreddit, which I frequent, where this concept still boggles people:

Nobody seems to know why JavaScript web development is so different from other types of web development, with so much thought having to be given to achieving old fashioned, synchronous programming. I explained, in a reply on that thread, that there were key differences to how to program with JavaScript:

The way you program with them will greatly differ because of JavaScript providing built in support for asynchronous programming (out of necessity, for the web browser). Google the “event loop” and “message queue” to learn about that.

Node I/O code is async by default, made sync explicitly (by using promises etc). Rails I/O code is sync by default, made async explicitly (by using threads or libraries providing futures and promises).

An amusing aspect of promises is that in Rails, we use them to enable async programming, but in Node, we use them to write sync code more easily.

JavaScript features a “message queue” and “event loop”, both described in more detail in the MDN article linked above. It basically comes down to this:

  • Invoking a function causes a frame to be added to the *stack*.
  • Functions have the ability to allocate memory on the *heap*.

And here’s where this starts to differ from traditional programming languages:

  • By accessing APIs (called Web APIs in the browser, and C++ APIs in Node.js), a JavaScript program can add a message to the *queue*. A message is an instruction to invoke a certain function.
  • The event loop continuously watches the stack, and when the stack is empty, it grabs the message at the front of the queue, which will cause a function to be invoked, causing a frame to be added to the stack.
  • The APIs include a feature to add a message to the queue after waiting a certain amount of time.

When you put all that together, you get an explanation of why callbacks exist and how they work. You're calling a function and passing a function that will be run at some point in the future. The asynchronous function you called (for example, Node's fs.readdir) has access to the APIs and will eventually, much like setTimeout, add a message to the queue. Voila. Non-blocking. Reading that directory doesn't block because the code that runs when the read is complete is represented by a message that isn't added to the queue until the read is complete. And that code won't run until the stack is empty, so higher priority work (already on the stack) will complete first.
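A quick way to see the queue in action (a minimal sketch, runnable in Node or a browser console):

```javascript
// The event loop only pulls a message off the queue when the stack is
// empty, so even a zero-delay timeout runs after all synchronous code.
const order = [];

order.push("start");             // a frame on the stack

setTimeout(() => {               // the API adds a message to the queue
  order.push("timeout");         // invoked only once the stack is empty
}, 0);

order.push("end");               // still on the stack, so it runs first

console.log(order.join(" -> ")); // logs "start -> end"; "timeout" comes later
```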

If this sounded overwhelming, I suggest watching that YouTube video. It’s very enlightening.

One thing’s for sure… I’ve become much more immersed in JavaScript than I had any idea I would be, and I have gained a huge respect for the language.

 


by Matt at March 14, 2017 03:02 PM

March 13, 2017


Rahul Gupta

Lab 6 – Picking and Learning a Good Editor

For this lab I wanted to try Visual Studio Code and Atom. I have never used these editors before, so I decided to learn how to work with them, their add-ons, and their common workflows. Previously I have used Notepad++, Eclipse and Sublime as code editors.

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. It was initially released on April 29, 2015, and is available in multiple languages. Visual Studio Code includes support for debugging and embedded Git control, and has many extra features such as syntax highlighting and code refactoring. Its repository is available here – Repository – and we can download it from the website –

https://code.visualstudio.com/

Atom is a free and open source editor developed by GitHub. It is available for macOS, Linux and Windows, with support for plug-ins written in Node.js and embedded Git control. It was initially released on February 26, 2014. Its repository is available here – Repository – and we can download it from the website –

https://atom.io/

Here’s how Atom looks with the entire Mozilla Brackets project opened in it:

Screen Shot 2017-03-13 at 7.01.39 PM

 

Here’s how Visual Studio Code looks with the entire Mozilla Brackets project opened in it:

Screen Shot 2017-03-13 at 6.49.17 PM

Verdict – 

After experimenting with these two editors, I would like to use Atom more than Visual Studio Code, as I felt more easygoing and comfortable with Atom. It has a very simple and efficient UI design and is very easy to use for new users. I was pleased by the themes that are bundled with Atom. It’s easy to customize and style Atom: we can tweak the look and feel with CSS/Less and add major features with HTML and JavaScript.

For Atom I will be demonstrating the 5 following tasks with screencast demonstrations:

  1. How to open a file, a folder of files in atom
  2. How to open the editor from the command line
  3. How to change your indent from tabs to spaces, 2-spaces, 4-spaces, etc?
  4. How to split the screen into multiple panes/editors/views
  5. How to change the theme of the editor

How to Open a file, a folder of files in Atom

In order to open a file we can simply right-click the file and open it with Atom. For folders, when we first open Atom it displays a welcome screen with an option to open a project. We can also open a project by going to File -> Open and selecting the project we want.

Opening a simple index.html file

clipfile

Opening a folder of files in atom

clipfolder

How to open Atom from Command Line

In order to open Atom from the command line we need to first install shell commands for Atom. This can be done via Atom -> Install Shell Commands.

cmd

How to change your indent from tabs to spaces, 2-spaces, 4-spaces, etc ?

By default the indentation in Atom is set to 2 spaces, but we can change it by going to Atom -> Preferences -> Editor -> Tab Length and then changing the value accordingly.
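These preferences also live in Atom’s config.cson file (typically ~/.atom/config.cson); a sketch of the equivalent entries, with example values:

```cson
"*":
  editor:
    tabType: "soft"   # use spaces instead of tabs
    tabLength: 4      # 4 spaces per indent level
```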

tabs

How to split the screen into multiple panes/editors/views?

In order to split the screen into multiple panes and views we just need to right-click and select the option we want. We can also have multiple views at the same time on every side (left, right, down).

views

How to change the theme of the editor ?

In order to change the theme of the editor we need to go to Atom->preferences->Theme

theme

Useful Atom Extensions-

  • todo-show
  • qolor
  • tree-view-panes
  • autocomplete-modules
  • atom-clock

 

todo-show

Finds all TODO, FIXME, CHANGED, XXX, IDEA, HACK, NOTE, REVIEW, NB, BUG, QUESTION, COMBAK, TEMP comments in your project and shows them in a nice overview list.

Screen Shot 2017-03-17 at 12.09.42 AM

qolor –

This extension is very useful for database developers. With the help of Qolor we can give distinct colors to our SQL queries. Qolor applies semantic highlighting to your SQL queries by matching tables to their aliases. All table colors are deterministic and based on the table’s name, so they will be the same on any Atom editor anywhere.

Screen Shot 2017-03-17 at 12.13.11 AM

tree-view-panes

Shows open files/panes at the top of the tree view. This package aims to provide an alternative to the functionality of tabs.

tree

autocomplete-modules

This extension autocompletes require/import statements. It includes file extensions, directories, modules and plugins in the completion.

auto

atom-clock

Displays a customizable clock in the status bar. You can specify the format used to display the date and time, and optionally show a clock icon to the left of the time (off by default).

Screen Shot 2017-03-17 at 12.26.52 AM

Overall, I enjoyed learning to work with both editors. There was a lot to learn, and I would definitely recommend the Atom editor to developers.

 


by rahul3guptablog at March 13, 2017 10:23 PM


Henrique Coelho

Fixing memory problems with Node.js and Mongo DB

Now that the basic functionality of Rutilus is done, I spent some time improving the memory limitations that we faced. In this post I will list the problems we faced and how I solved them.

Observation: We were using Mongoose for these queries, and not the native Node.js driver.

1- Steps in the aggregation pipeline taking too much memory

From the MongoDB manual:

"Aggregations are operations that process data records and return computed results. MongoDB provides a rich set of aggregation operations that examine and perform calculations on the data sets. Running data aggregation on the MongoDB instance simplifies application code and limits resource requirements."

So, obviously, a pipeline such as the one below would need to have memory available to perform all those stages:

ZipCodes
  .aggregate([
    { $group: {
      _id: { state: "$state", city: "$city" },
      pop: { $sum:  "$pop" }
    }},
    { $sort: { pop: 1 }},
    { $group: {
      _id : "$_id.state",
      biggestCity:  { $last:  "$_id.city" },
      biggestPop:   { $last:  "$pop"      },
      smallestCity: { $first: "$_id.city" },
      smallestPop:  { $first: "$pop"      }
    }},
    { $project: {
      _id: 0,
      state: "$_id",
      biggestCity:  { name: "$biggestCity",  pop: "$biggestPop"  },
      smallestCity: { name: "$smallestCity", pop: "$smallestPop" }
    }}
  ])
  .exec((err, docs) => {
    ...
  });

The problem we were having in this case was: we did not have enough memory to perform the stages, even though we did have enough memory for the output. In other words: the output was small and concise, but we needed a lot of memory to do it.

The solution for this was easy: we can simply tell Mongo to use disk space temporarily to store the data. It probably is slower, but it is better than not being able to run the query at all. To do this, we just needed to add an extra step (allowDiskUse) to that method chain:

ZipCodes
  .aggregate([
    ...
  ])
  .allowDiskUse(true) // < Allows MongoDB to use the disk temporarily
  .exec((err, docs) => {
    ...
  });

2- Result from aggregation pipeline exceeding maximum document size

For queries with a huge number of results, the aggregation pipeline would greet us with the lovely "exceeds maximum document size" error. This is because the result of an aggregation pipeline is returned in a single BSON document, which has a size limit of 16MB.

There are two ways to solve this problem:

1- Piping the results to another collection and querying it later

2- Getting a cursor to the first document and iterating through it

I picked the second method, and this is how I used it:

const cursor = ZipCodes
  .aggregate([
    ...
  ])
  .allowDiskUse(true)
  .cursor({ batchSize: 1000 }) // < Important
  .exec(); // < Returns a cursor

// The method .toArray of a cursor iterates through all documents
// and load them into an array in memory
cursor.toArray((err, docs) => {
  ...
});

The batchSize refers to how many documents we want returned in every batch, but according to the MongoDB documentation, this will not affect the use of the application because most results are returned in a single batch.

3- JavaScript Heap out of memory

After getting those beautiful millions of rows from the aggregation pipeline, we were greeted by another lovely error: "FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory". This happens when the Node.js heap runs out of memory (as you probably inferred from the description of the error).

According to some sources on the internet, the default memory limit for Node.js on 32-bit systems is 512MB, and 1GB for 64-bit systems. We can increase this memory limit when launching the Node.js application with the option --max_old_space_size, specifying how much memory we want in MB. For example:

node --max_old_space_size=8192 app.js

This will launch the app.js application with an 8GB heap instead of 1GB.

by Henrique Salvadori Coelho at March 13, 2017 09:04 PM


Rahul Gupta

DPS 909 Lab 05 – Release 0.2 Preparation

For release 0.2 I decided to choose two bugs from Thimble:

  1. #Issue-1754 – Editor menu arrow disconnect on windows 10 on bootcamp
  2. #Issue-1780 – Drop-down menus on the projects page overflow past the edge of the page

For the first bug, i.e. Issue 1754, it was possibly a really OS-specific bug. The bug actually showed up only in Windows 10 on Boot Camp; I was not able to reproduce it on any other OS. On my Boot Camp install it showed up especially clearly in the Microsoft Edge browser. There was a misalignment between the editor menu and the associated triangle pointer on Windows 10 running on Boot Camp. The likely explanation for this bug involves display settings and UI scaling: 150% scaling on a Retina display rather than 200% UI scaling was specifically the issue. The triangle-like pointer would also crop up sometimes when just zooming the browser. I was really interested in this bug and wanted to give it a shot. My professor David Humphrey @humphd suggested some ideas to me as well.

Screen Shot 2017-03-13 at 3.40.31 PM

Issue 1780 deals with a drop-down menu issue. Whenever we try to open the dropdown menu on our projects page, its length overflows past the edge of the page. It is actually placed correctly on the homepage; it’s just not properly aligned on the projects page. The bug is fairly simple, and modifying the CSS file to match the homepage values should fix it. I was able to reproduce the issue on my machine, and with the help of developer tools I was able to point out where I need to make the changes to fix the issue.

clip

On the homepage it appears exceptionally well –

Screen Shot 2017-03-13 at 3.54.03 PM

I am really looking forward to working on these issues. In order to fix these bugs I plan on following the suggestions given by @humphd. Also, with the use of dev tools, I was able to figure out which files to modify. Hopefully everything goes according to plan and fixes the bugs.

 


by rahul3guptablog at March 13, 2017 07:55 PM


Catherine Leung

Assembly

While a student, I learned a bit of assembly as part of my degree.  I never had a strong feeling about it.  I wasn’t particularly interested in it, as it just wasn’t something I thought I would do; I didn’t see the point of it.  I knew of its existence and what it meant.  It gave me a sense of the different parts of how a computer works.  I just never really thought I would ever want to write code in it.  So I chalked it up as one of those courses I had to take and left it at that.

In the SPO600 course, we look at how code behaves when it hits the machine level.  The same program compiled on different processors will generate different assembly.  I knew that when you write C code, the compilation happens in essentially 3 phases:

preprocessor –> compilation –> linking

I knew that what came out of the preprocessor was still basically C, what came out of compilation was an object file, and that linking puts it all together.

I did not know how to look at the object files, though, until now.

To stop after the compilation phase you use the -c flag, which gives you a .o file.

For both the executable and the object file you can look at the assembly by using:

objdump -d <file>

 


by Cathy at March 13, 2017 07:10 PM


Joshua Longhi

Course Project – Phase 0

For this project we are going to be optimizing functions from glibc, the standard C library. The function I am choosing to optimize is mpn_cmp() in the stdlib cmp.c file. This function takes two low-level multi-precision integers and returns 1 if int1 > int2, 0 if they are equal, and -1 if int1 < int2. The heart of the function is this loop, which compares the operands word by word, from the most significant word down:

for (i = size - 1; i >= 0; i--)
  {
    op1_word = op1_ptr[i];
    op2_word = op2_ptr[i];
    if (op1_word != op2_word)
      goto diff;
  }
return 0;


by jlonghiblog at March 13, 2017 04:43 AM

March 12, 2017


Theo D

Blog Post 8 – Finding an Editor (Lab 6)

This week I worked on Lab 6, which involved researching and dissecting two editors that I haven’t used before. I decided to choose the editors Atom (about 3 years old) and Visual Studio Code (about 2 years old). I chose Atom because I heard many students have good things to say about it. As for Visual Studio Code, I chose it because I was curious to see how it differed from Microsoft’s pricey bigger brother, Visual Studio. That being said, contrary to first thought, Visual Studio Code is actually based on Electron (made by the same people that brought you Atom). Well, let’s dive right in! *All tests are done on macOS Sierra

Brackets Opened File

Screen Shot 2017-03-14 at 1.13.01 PM

Visual Studio Code Opened File

Screen Shot 2017-03-14 at 1.13.25 PM.png

Atom Opened File

Screen Shot 2017-03-14 at 1.16.59 PM.png

Screen Shot 2017-03-13 at 3.27.21 PM.png

Screen Shot 2017-03-13 at 3.36.46 PMScreen Shot 2017-03-13 at 3.39.35 PM

How to open a file, a folder of files (e.g., an entire project)

Opening up files, folders or an entire project in Atom is a snap. By simply clicking on File, you are greeted with:

Screen Shot 2017-03-13 at 3.45.36 PM.png

open_project.gif

Navigating this dropdown gives you access to all your file and project opening needs.

How to change your indent from tabs to spaces

Indent changes can be done by navigating to Edit -> Lines and choosing to indent, outdent, or auto-indent.

indents.png

To modify indents all together, simply navigate to preferences by clicking on Atom in the menu bar and clicking Preferences. Once in preferences, select the Editor tab and scroll down you should see:

tab_spaces.png

How to open the editor from the command line

In order to complete this I had to install shell commands found in the Atom Editor.

install_shell_commands.png

Opening from the terminal was then straightforward: > atom file.extension

open_terminal.gif

How to find things (e.g., a string, a file)

Searching in Atom is just like any program, pressing COMMAND + F will pop up the search fields and options available (such as replace):

finding1

If you would choose to replace here is what you get (Red: before, Green: after):

found_replace

Finding in a directory is simple as well: by right clicking a folder in the file tree you can select “Search in Directory”.

searchEveryfile

How to Split Panes

Splitting the screen into multiple editors and views is as simple as a drag:

split_tabs.gif

If you prefer a different approach you can also try selecting View from the menu bar and navigating to Panes, here are given the following list of options:

Screen Shot 2017-03-13 at 4.19.28 PM.png

How to install Editor Extensions

Installing extensions was a breeze: head over to Preferences, choose an extension and click Install. Atom will do everything for you. From here you can add more themes and functions to Atom.

extensions.gif

What are some common key bindings

Screen Shot 2017-03-13 at 6.09.40 PM.png

Searching for keybindings, changing keybindings and even narrowing down commands is easy in Atom.

Changing these bindings is done by clicking the copy-to-clipboard button for the keystroke and then pasting it into the keymap file, as seen below:

Screen Shot 2017-03-13 at 6.13.06 PM.png
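A pasted keymap.cson entry ends up looking something like this (the selector, keystroke, and command here are just illustrative):

```cson
# Bind Ctrl-Alt-K to delete the current line in any text editor pane
'atom-text-editor':
  'ctrl-alt-k': 'editor:delete-line'
```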

Autocomplete for coding HTML, JS, CSS, etc

Autocomplete is enabled, disabled and added from Preferences, under the Packages tab. Here you can add new autocomplete packages, disable others or even uninstall them. Below you can see Atom giving an example:

auto

Atom is very easy to use, and tries hard to be minimalistic. I’m currently still getting used to coding in it, though I did disable the “Greeting” message that kept popping up at startup (which, again, is done from Preferences). I did have a couple of hiccups here and there with performance, but that could have been a memory leak elsewhere, because a restart seemed to fix the problem.

Screen Shot 2017-03-13 at 6.28.19 PM.png

Screen Shot 2017-03-13 at 3.36.21 PM.png

How to open a file, a folder of files (e.g., an entire project)

Similar to Atom, simply click File and find your file. Both files and project folders can be opened.

visual_openfile.gif

How to change your indent from tabs to spaces

At first I thought it looked a little less professional and might be a little harder to use than Atom, but I really ended up liking it a lot more because it was so easy to search and modify the JSON file that hosts the settings.

Screen Shot 2017-03-13 at 6.45.28 PM.png

How to open the editor from the command line

Now as for the terminal, I simply had to install the shell commands by pressing ⇧⌘P and then searching for “shell command install”. The website has a full, easy-to-follow tutorial here.

visual_shell.gif

How to find things (e.g., a string, a file)

Screen Shot 2017-03-13 at 6.49.14 PM.png

Much like Atom, you can use ⌘F to search for items within the files, or use the more advanced search function that is built into the file tree. You can search through multiple opened files and even replace code snippets.

How to Split Panes

visual_splitting.gif

Splitting panes is just as easy as in Atom, but it’s a little tedious that a pop-up appears to baby your navigation. Simply clicking “Don’t Show Again” gets rid of these tedious pop-ups.

How to install Editor Extensions

visual_ext.gif

I ended up liking the Extensions tab in VS Code because it was easier to read, and you can modify code live while installing extensions. It also seems to have more support at first glance.

What are some common key bindings

visual_binding.gif

Again it was similar to Atom, but it just felt like there were more choices and it was easier to both browse and modify. It’s simple to follow and edit all the key bindings you require to get your work done.

Autocomplete for coding HTML, JS, CSS, etc

Screen Shot 2017-03-13 at 7.03.34 PM.png

The autocomplete was just as easy to navigate, and the multi-pane design made it fast and easy to jump between panes.

Visual Studio Code just feels more fleshed out to me, and it feels easier to navigate. The program also felt more stable and opened files quicker, especially if they were compressed. The editor feels a lot like Notepad++, but right now I’m actually liking it more.

Extensions in Visual Studio Code

Screen Shot 2017-03-13 at 7.11.24 PM.png

Simply clicking the “…” settings button will pop this view out and you can choose from the most popular extensions available. I ended up choosing:

  • C/C++

Screen Shot 2017-03-13 at 7.13.30 PM.png

This is for all my C/C++ coding needs, all features and available services can be seen above.

  • Atom One Dark Theme

Screen Shot 2017-03-13 at 7.15.07 PM.png

A nice new color scheme that is easy and nice on the eyes.

  • mssql

visual_ext_sql.gif

I always like to use MySQL, and I’m always looking for new ways to code and modify my tables.

  • npm

Screen Shot 2017-03-13 at 7.18.06 PM.png

  • Terminal

visual_term

Just a nifty tool so you don’t have to keep swiping back and forth.

Conclusion

And the winner? Well, neither came with more or fewer features from what I saw, but I just liked using Visual Studio Code more. It felt complete and easy to navigate, without me needing Google searches or YouTube videos to find out how to do certain things.

1476486526543.png


by theoduleblog at March 12, 2017 06:35 PM


Max Fainshtein

Lab 6

For this lab I selected Atom & Nuclide as my two editors.

Atom

Download link

https://atom.io/

Reason for picking

I chose this editor because I had never heard of it before, and after doing a little bit of research it seemed pretty cool and customizable.

Supported languages

Using the default plugins, language support consists of HTML, CSS, Less, Sass, GitHub Flavored Markdown, C/C++, C#, Go, Java, Objective-C, JavaScript, JSON, CoffeeScript, Python, PHP, Ruby, Ruby on Rails, shell script, Clojure, Perl, Git, Make, Property List (Apple), TOML, XML, YAML, Mustache, Julia and SQL.

Adjusting settings

In the Settings tab, which is accessible from File > Settings or with the shortcut “Ctrl+,”, you can find and adjust all the settings and change them as you see fit, including font size, tab length, auto-indentation, etc. You will also be able to find all the keybindings, with instructions on how to change them if you desire.

settings2

Command line use

You also have the option to open a project or file using the command line. First you need to add the path to the bin folder, located at Users/{User name}/Local/atom/bin, to your environment variables. Once this is done you simply need to navigate to the file you wish to edit and use the command “atom (unknown)”.

Managing windows & searching between files

Moving and closing files is very easy in Atom. By right-clicking on the file headers at the top of the window you get simple, easy-to-understand options to move the file to another window, or to close that file or other files. Through Ctrl+F you gain find-and-replace functionality, with a couple of ways to adjust the search, including case sensitivity, whole-word matching, and section search, which checks only the highlighted sections of the file. You also have the option of using Ctrl+Shift+F to open a find/replace window that allows searching multiple files, which can be narrowed down by folder or file type, e.g. *.js, which would only check the JavaScript files in the project.

Useful commands
Ctrl+g : which goes to a line

Ctrl+end : moves to bottom

Ctrl+home : moves to top

Ctrl+Shift+k : delete current line

settings

Nuclide

This editor is actually a package which is added onto Atom. I wanted to check this one out to see what it changes in Atom.

Supported languages

Since Nuclide is a package on top of Atom, the default language support is the same as Atom’s (listed above).

useful commands
Ctrl+down : move down to next line

Ctrl+up : move up to previous line

Ctrl+j : join lines together

installing packages

To install packages you simply need to go to Settings via File > Settings, then go to the Packages tab. Then you just need to type what you are looking for in the search field, or add something from the featured section below the search field.

Auto-Complete

If you want to disable or tweak autocomplete, find the autocomplete-plus package under Settings (File > Settings > Packages); all the information regarding autocomplete is there and can be tweaked to your personal preference.

Opening projects

To open a project you either need to go to File > Open Folder or use the shortcut Ctrl+Shift+O, then navigate to your project folder and hit Open, and a folder hierarchy of your project will appear.

Extensions

I will be discussing Atom’s extensions, because Nuclide is an extension of Atom itself.

Nuclide: advertises a PHP debugger, a context view which allows for easy navigation between symbols and their definitions, as well as the ability to connect to a remote server to edit files.

 atom-beautify: this extension beautifies HTML, CSS, JavaScript, PHP, Python, Ruby, Java, C, C++, C#, Objective-C, CoffeeScript, TypeScript, Coldfusion, SQL, and more in Atom.

spell-check: highlights misspelled words and shows possible corrections

Language-javascript: an included extension that provides JavaScript support in Atom

turbo-javascript: a collection of commands and snippets for optimizing JavaScript and TypeScript development productivity

 

 


by mfainshtein4 at March 12, 2017 03:53 AM


Arsalan Khalid

Deciding Between Ethereum or Hyperledger : Contributing Towards Open Source Development

So… in the Blockchain world there are a ton of super cool development opportunities; I would go as far as saying life-changing opportunities. Currently the Blockchain open source development community is centered around 2 similar yet different technologies. Those two are Hyperledger and Ethereum.

See, Ethereum is a fully running and public Blockchain, similar to Bitcoin, though many people would say it is the improved offspring of Bitcoin. Hyperledger, on the other hand, is an attempt at organizing an ‘enterprise’, open-source consortium for the creation of business Blockchain applications. I say attempt because, although its development and growth have been pretty active, anything involving both ‘enterprise’ and ‘open-source’ I find pretty scary. I draw examples from RedHat and pretty much any CRM ever, but even then the scarier fact is that it is a consortium of the world’s largest corporations as well as FinTechs in the Blockchain space. I’m just skeptical to a degree, because can these organizations really work together to create a well-functioning, applicable enterprise Blockchain? Part of the reason is that, as a consultant from Accenture, I’ve built a fair bit of a resume working with some of these organizations and FinTechs, and it has me questioning whether there will be real input and organizational structure within this consortium. Not only that, but will their development resources pool together correctly to actually build the product? Enterprise products can only go so far on open source contributions alone, which is currently how Fabric is moving forward. Fabric being Hyperledger’s main Blockchain infrastructure, for any noobie out there.

Anyways, regardless of the fact that Hyperledger is an enterprise consortium for the development of a few open source Blockchain technologies, I think it still definitely has some potential. As I mentioned earlier, their main ‘open source’ product is Fabric, which is essentially a private Blockchain that any developer can set up to run smart contracts, cryptocurrencies, and other standard DT stuff. Overall, I think there’s a definite difference between the fundamental Ethereum protocol and platform and Fabric. The main difference being that Ethereum is a fully running public Blockchain, which facilitates the exchange of its cryptocurrency Ether, which you can of course convert to other cryptocurrencies using exchanges. Not only that, but it has everything else such as DAOs, smart contracts, and the ability to create a custom private Blockchain. I would also offer the caveat that just recently the Ethereum Foundation made a partnership with major consulting and financial organizations to build a consortium behind creating an enterprise/banking-grade Ethereum platform.

So all in all, safe to say both platforms are both magnificent in their feats and capabilities. But I think, as a developer it’s important to make the distinction in terms of what’s best from a growth and open source development perspective. Now, from looking at the Hyperledger & Fabric docs:
https://www.hyperledger.org/community/projects
https://hyperledger-fabric.readthedocs.io/en/latest/

This definitely isn’t easy stuff, just to get the dev environment setup you have to:

  1. Setup a couple Docker containers
  2. Setup Golang
  3. Bootstrap VM using Vagrant
  4. Build & Run Repo

I would say these steps are relatively standard, but it’s Golang that really pisses me off here. I mean, don’t get me wrong, it’s a pretty sick, innovative, and functionally written language, but does the average person have experience with it? Hyperledger has taken this into account and is working on a few SDKs, like Java and Node, but they are still in their infancy. Overall, this just feels like a pretty big rabbit hole to set up, deploy, contribute to, and merge into. I would also add that the open developer community on Gitter, Slack, etc. is nowhere near as big as Ethereum’s. So let’s do a similar check with Ethereum and the open source contribution possibilities there too.

First off, Ethereum comes with many GUI applications, like the wallet, command line tools (not GUI, but still cool), and a Blockchain GUI app. Not only that, but you can set up clients to connect to the Blockchain; they’ve created web socket libraries and clients that can be deployed on Windows, OSX, etc. to connect and develop against. Docs linked here:

Introduction - Ethereum Homestead 0.1 documentation

Now here’s the thing: even though the platform is live and running, it’s still incredibly massive. In my case, as a beginner open source developer and intermediate Blockchain developer, I have to make the call between projects, choosing one that will let me get involved and work on tasks important enough for me to grow and gain somewhat of a reputation. The other thing is, I think I need a project that gives me enough support in terms of a community, manageable issues/tasks, and an easy-to-start dev environment.

So…I found exactly that:

ethereum/ethereumj

It’s a Java client for Ethereum, and you can easily set it up and deploy it using IntelliJ IDEA (which I’ll also be doing a blog post on, along with testing out Emacs). It also has a pretty decent and active community on gitter:
http://ethereumj.io/
https://gitter.im/ethereum/ethereumj

Overall, I think I’m going to start with setting up EthereumJ, running the client, and figuring out how I can contribute, as well as finding some great issues for me to look into.

Thanks for joining me on the journey.
Cheers,
Arsalan

by Arsalan Khalid at March 12, 2017 12:35 AM

March 11, 2017


Margaryta Chepiga

Second Release

Why would people challenge themselves? Why would they push their own limits? Why did I decide to fix a bug in JavaScript, a language that I am not really familiar with?

The answer is always the feeling of accomplishment. Once a wise man told me that in order to be better, in order to improve yourself, you need to push yourself through your limits. By doing things that you already know, you are not going to learn much. Practice makes perfect, but with the remark that you must do more than practice.

Back to the topic. As you know, I was working on this bug for my second release.

Was it a struggle for me? Yes, it was. I spent at least 5 full days fixing the bug. And I consider myself a lucky person, because I got all the necessary information for the fix provided on the issue page on GitHub. Which means that in those 5 days I didn’t have to spend much time figuring out where the function is, how everything works, and what gets affected. It was provided to me.

The problems appeared when I was already working on the bug, and at one point I just couldn’t run the Brackets server. Honestly, I don’t want to get into much detail about it, since I have no idea why it wasn’t working and I couldn’t fix it. I struggled with the server and wasn’t able to test anything for 2 days. Since I couldn’t fix it, I started all over again: removed Brackets and Thimble from my local machine and just installed everything again, which surprisingly resolved the problem. What I learned from this situation is that sometimes it is much faster to reinstall the whole thing than to try to make your server work 😀 I honestly don’t think that is a good idea, but sometimes you just don’t have any other choice, especially if you are limited on time.

Was it hard to fix a bug?

It is hard to say… On one side, logically I knew what I was supposed to do (again, thanks for the instructions), which made it easy. On the other side, my logic was not working in JavaScript. So for me it was a matter of time; I just kept trying. I tried at least 20-30 different ways to fix the bug. An interesting thing actually happened there. Once, one of the ways actually worked, which was amazing! I was about to make a pull request when I decided to make a gif image in order to show the fix. But I couldn’t. The fix that was working 10 minutes ago was not working anymore. How come?! I literally thought I was going crazy or something. How come it worked and then stopped 10 minutes later?! It took me some time to figure out that the problem was in the logic. What have I learned? If your fix works only once, it is not a fix! It is still a bug, and most probably you have a problem in your logic.

Eventually I was able to fix the bug. I can’t say that it was easy, and at the same time I can’t say it was extremely hard. What I can say without any doubt is that it was extremely time consuming. I don’t really know JavaScript, and I don’t really understand its logic. I think that was the main problem for me, and the reason it took me so much time and effort. On the bright side, I eventually fixed the bug, and the logic that I came to works perfectly. Can I even ask for more?

 


by mchepigaosd600 at March 11, 2017 10:33 PM


Ray Gervais

Compiler Vectorization in Assembly

SPO600 Week Six Deliverable

Introduction

For this exercise, we were tasked with the following instructions, cautioned that only those with patience would complete this lab with their sanity intact:

  1. Write a short program that creates two 1000-element integer arrays and fills them with random numbers, then sums those two arrays to a third array, and finally sums the third array to a long int and prints the result.
  2. Compile this program on an aarch64 machine in such a way that the code is auto-vectorized.
  3. Annotate the emitted code (i.e., obtain a disassembly via objdump -d and add comments to the instructions in <main> explaining what the code does).
  4. Review the vector instructions for AArch64. Find a way to scale an array of sound samples (see Lab 5) by a factor between 0.000-1.000 using SIMD.

Step 1

Below, I’ve included the simple C code which achieves the desired functionality. It’s very easy to read, with no complexity outside the realm of a standard incremental math operation and the ever-so-popular addition operator. Also included is a random number generator driven by stdlib’s rand() function. Originally, I had the calculations relating to the c array in a separate for loop, with the result calculation occurring in that for statement as well. This was moved into the loop used by arrays a and b, making the program run in O(n) instead of O(2n).

C Code

#include <stdio.h>
#include <time.h>
#include <stdlib.h>

#define SIZE 1000

int main() {
        int a[SIZE]
            , b[SIZE]
            , c[SIZE];

        time_t t;
        long int result = 0;

        // Init Random Number Generator
        srand((unsigned) time(&t));

        int i;
        for(i = 0; i < SIZE; i++) {
                // Fill Arrays with Random Numbers
                a[i] =  rand() % 10;
                b[i] =  rand() % 10;

                // Sum Array Values into C[]
                c[i] = a[i] + b[i];

                // result stores final calculation
                result += c[i];
        }

        printf("Final Result: %ld \n", result);
        return 0;
}

Step 2

To compile the application in such a way that the compiler utilizes advanced optimization techniques, I used the -O3 argument, which incorporates vectorization where possible by default. Had I not wanted to use -O3, I could instead use -ftree-vectorize, which provides the same desired optimization.

gcc -o lab06 -O3 Vector.c

What is Auto-Vectorization?

The great wonder which is Wikipedia has the following explanation, which I shamelessly have posted below to supplement the answer to this question:

Automatic vectorization, in parallel computing, is a special case of automatic parallelization, where a computer program is converted from a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once.

Step 3

Below is my analysis of the lab06 file, including my comments on the right side. Viewing of such data was made possible by using the objdump -d command, then routing said command’s output into an empty .asm file for editing purposes. I will not deny that my analysis has many plot holes, full of assumptions that are incorrect, misread Assembly code, or incorrectly parsed arguments. Regardless of the vectorization, machine language is the closest this web developer has ever gotten to the CPU and the hardware itself. Would I say I enjoy reading Assembly code? No. Do I see how it is an invaluable source of optimization prowess which rivals even the best C code? Yes. But I’d be a fool to say that it is my cup of tea. Without further discussion of my failings related to software optimization, analysis, and the beast which is .asm, here is my analysis.

Assembly Code

lab6:     file format elf64-littleaarch64


Disassembly of section .init:

00000000004004a0 :
  4004a0:       a9bf7bfd        stp     x29, x30, [sp,#-16]!
  4004a4:       910003fd        mov     x29, sp
  4004a8:       9400005c        bl      400618 
  4004ac:       a8c17bfd        ldp     x29, x30, [sp],#16
  4004b0:       d65f03c0        ret

Disassembly of section .plt:

00000000004004c0 <time@plt-0x20>:
  4004c0:       a9bf7bf0        stp     x16, x30, [sp,#-16]!
  4004c4:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  4004c8:       f945be11        ldr     x17, [x16,#2936]
  4004cc:       912de210        add     x16, x16, #0xb78
  4004d0:       d61f0220        br      x17
  4004d4:       d503201f        nop
  4004d8:       d503201f        nop
  4004dc:       d503201f        nop

00000000004004e0 <time@plt>:
  4004e0:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  4004e4:       f945c211        ldr     x17, [x16,#2944]
  4004e8:       912e0210        add     x16, x16, #0xb80
  4004ec:       d61f0220        br      x17

00000000004004f0 <__libc_start_main@plt>:
  4004f0:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  4004f4:       f945c611        ldr     x17, [x16,#2952]
  4004f8:       912e2210        add     x16, x16, #0xb88
  4004fc:       d61f0220        br      x17

0000000000400500 <rand@plt>:
  400500:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  400504:       f945ca11        ldr     x17, [x16,#2960]
  400508:       912e4210        add     x16, x16, #0xb90
  40050c:       d61f0220        br      x17

0000000000400510 <__gmon_start__@plt>:
  400510:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  400514:       f945ce11        ldr     x17, [x16,#2968]
  400518:       912e6210        add     x16, x16, #0xb98
  40051c:       d61f0220        br      x17

0000000000400520 <abort@plt>:
  400520:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  400524:       f945d211        ldr     x17, [x16,#2976]
  400528:       912e8210        add     x16, x16, #0xba0
  40052c:       d61f0220        br      x17

0000000000400530 <srand@plt>:
  400530:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  400534:       f945d611        ldr     x17, [x16,#2984]
  400538:       912ea210        add     x16, x16, #0xba8
  40053c:       d61f0220        br      x17

0000000000400540 <printf@plt>:
  400540:       90000090        adrp    x16, 410000 <__FRAME_END__+0xf698>
  400544:       f945da11        ldr     x17, [x16,#2992]
  400548:       912ec210        add     x16, x16, #0xbb0
  40054c:       d61f0220        br      x17

Disassembly of section .text:

0000000000400550 :
  400550:       a9bc7bfd        stp     x29, x30, [sp,#-64]!            # Save frame pointer and link register, allocate 64-byte frame
  400554:       910003fd        mov     x29, sp                         # Set frame pointer to current stack pointer
  400558:       9100e3a0        add     x0, x29, #0x38                  # x0 = address of local t (x29 + 0x38), argument for time()
  40055c:       a9025bf5        stp     x21, x22, [sp,#32]              # Preserve callee-saved registers x21, x22
  400560:       a90153f3        stp     x19, x20, [sp,#16]              # Preserve callee-saved registers x19, x20
  400564:       97ffffdf        bl      4004e0 <time@plt>               # Call time@plt branch (argument for srand in c)
  400568:       97fffff2        bl      400530 <srand@plt>              # Call srand@plt branch (init srand in c)
  40056c:       52807d13        mov     w19, #0x3e8                     # MAX defined value from #0x3e8 (1000) into w19
  400570:       d2800015        mov     x21, #0x0                       # Init of int i to 0, stored into x21
  400574:       52800156        mov     w22, #0xa                   

  // Loop Starts: Fill Array A, B, with Random Numbers
  400578:       97ffffe2        bl      400500 <rand@plt>               # Call rand@plt branch
  40057c:       2a0003f4        mov     w20, w0                         # Move data from w0 to w20  
  400580:       97ffffe0        bl      400500 <rand@plt>               # Call rand@plt branch
  400584:       1ad60e83        sdiv    w3, w20, w22                    # Operation: w3 = w20 / w22
  400588:       1ad60c02        sdiv    w2, w0, w22                     # Operation: w2 = w0 / w22
  
  40058c:       0b030863        add     w3, w3, w3, lsl #2              # Operation: w3 = w3 + shift(w3, left-shift offset by 2)
  400590:       0b020842        add     w2, w2, w2, lsl #2              # Operation: w2 = w2 + shift(w2, left-shift offset by 2)
  400594:       4b030694        sub     w20, w20, w3, lsl #1            # Operation: w20 = w20 - shift(w3, left-shift offset by 1)
  400598:       4b020400        sub     w0, w0, w2, lsl #1              # Operation: w0 = w0 - shift(w2, left-shift offset by 1)
  40059c:       0b000280        add     w0, w20, w0                     # Operation: w0 = w20 + w0 (c[i] = a[i] + b[i])
  4005a0:       71000673        subs    w19, w19, #0x1                  # Operation: w19 = w19 - #0x1 W/ Flag Set (loop counter)
  4005a4:       8b20c2b5        add     x21, x21, w0, sxtw              # Operation: x21 = x21 + sign-extended w0 (result += c[i])
  4005a8:       54fffe81        b.ne    400578 <main+0x28>
  // End of Loop

  // Print Final Result
  4005ac:       90000000        adrp    x0, 400000         # Compute page address of format string into x0
  4005b0:       aa1503e1        mov     x1, x21                         # Move data from x21 into x1
  4005b4:       911f4000        add     x0, x0, #0x7d0                  # Operation: x0 = x0 + #0x7d0
  4005b8:       97ffffe2        bl      400540 <printf@plt>             # Call: printf@plt 
  4005bc:       2a1303e0        mov     w0, w19                         # Move data from w19 into w0
  4005c0:       a9425bf5        ldp     x21, x22, [sp,#32]              # Restore x21, x22 from [sp,#32]
  4005c4:       a94153f3        ldp     x19, x20, [sp,#16]              # Restore x19, x20 from [sp,#16]
  4005c8:       a8c47bfd        ldp     x29, x30, [sp],#64              # Restore x29, x30, deallocate 64-byte frame
  4005cc:       d65f03c0        ret                                     # Return (Exit)

00000000004005d0 :
  4005d0:       d280001d        mov     x29, #0x0                       // #0
  4005d4:       d280001e        mov     x30, #0x0                       // #0
  4005d8:       910003fd        mov     x29, sp
  4005dc:       aa0003e5        mov     x5, x0
  4005e0:       f94003e1        ldr     x1, [sp]
  4005e4:       910023e2        add     x2, sp, #0x8
  4005e8:       910003e6        mov     x6, sp
  4005ec:       580000a0        ldr     x0, 400600 <_start+0x30>
  4005f0:       580000c3        ldr     x3, 400608 <_start+0x38>
  4005f4:       580000e4        ldr     x4, 400610 <_start+0x40>
  4005f8:       97ffffbe        bl      4004f0 <__libc_start_main@plt>
  4005fc:       97ffffc9        bl      400520 <abort@plt>
  400600:       00400550        .word   0x00400550
  400604:       00000000        .word   0x00000000
  400608:       00400730        .word   0x00400730
  40060c:       00000000        .word   0x00000000
  400610:       004007a8        .word   0x004007a8
  400614:       00000000        .word   0x00000000

0000000000400618 :
  400618:       90000080        adrp    x0, 410000 <__FRAME_END__+0xf698>
  40061c:       f945b000        ldr     x0, [x0,#2912]
  400620:       b4000040        cbz     x0, 400628 <call_weak_fn+0x10>
  400624:       17ffffbb        b       400510 <__gmon_start__@plt>
  400628:       d65f03c0        ret
  40062c:       00000000        .inst   0x00000000 ; undefined

0000000000400630 :
  400630:       90000081        adrp    x1, 410000 <__FRAME_END__+0xf698>
  400634:       90000080        adrp    x0, 410000 <__FRAME_END__+0xf698>
  400638:       912f0021        add     x1, x1, #0xbc0
  40063c:       a9bf7bfd        stp     x29, x30, [sp,#-16]!
  400640:       912f0000        add     x0, x0, #0xbc0
  400644:       91001c21        add     x1, x1, #0x7
  400648:       910003fd        mov     x29, sp
  40064c:       cb000021        sub     x1, x1, x0
  400650:       f100383f        cmp     x1, #0xe
  400654:       54000068        b.hi    400660 <deregister_tm_clones+0x30>
  400658:       a8c17bfd        ldp     x29, x30, [sp],#16
  40065c:       d65f03c0        ret
  400660:       58000081        ldr     x1, 400670 <deregister_tm_clones+0x40>
  400664:       b4ffffa1        cbz     x1, 400658 <deregister_tm_clones+0x28>
  400668:       d63f0020        blr     x1
  40066c:       17fffffb        b       400658 <deregister_tm_clones+0x28>
        ...

0000000000400678 :
  400678:       90000080        adrp    x0, 410000 <__FRAME_END__+0xf698>
  40067c:       90000081        adrp    x1, 410000 <__FRAME_END__+0xf698>
  400680:       912f0000        add     x0, x0, #0xbc0
  400684:       912f0021        add     x1, x1, #0xbc0
  400688:       cb000021        sub     x1, x1, x0
  40068c:       a9bf7bfd        stp     x29, x30, [sp,#-16]!
  400690:       9343fc21        asr     x1, x1, #3
  400694:       910003fd        mov     x29, sp
  400698:       8b41fc21        add     x1, x1, x1, lsr #63
  40069c:       9341fc21        asr     x1, x1, #1
  4006a0:       b5000061        cbnz    x1, 4006ac <register_tm_clones+0x34>
  4006a4:       a8c17bfd        ldp     x29, x30, [sp],#16
  4006a8:       d65f03c0        ret
  4006ac:       580000a2        ldr     x2, 4006c0 <register_tm_clones+0x48>
  4006b0:       b4ffffa2        cbz     x2, 4006a4 <register_tm_clones+0x2c>
  4006b4:       d63f0040        blr     x2
  4006b8:       17fffffb        b       4006a4 <register_tm_clones+0x2c>
  4006bc:       d503201f        nop
        ...

00000000004006c8 :
  4006c8:       a9be7bfd        stp     x29, x30, [sp,#-32]!
  4006cc:       910003fd        mov     x29, sp
  4006d0:       f9000bf3        str     x19, [sp,#16]
  4006d4:       90000093        adrp    x19, 410000 <__FRAME_END__+0xf698>
  4006d8:       396ef260        ldrb    w0, [x19,#3004]
  4006dc:       35000080        cbnz    w0, 4006ec <__do_global_dtors_aux+0x24>
  4006e0:       97ffffd4        bl      400630 
  4006e4:       52800020        mov     w0, #0x1                        // #1
  4006e8:       392ef260        strb    w0, [x19,#3004]
  4006ec:       f9400bf3        ldr     x19, [sp,#16]
  4006f0:       a8c27bfd        ldp     x29, x30, [sp],#32
  4006f4:       d65f03c0        ret

00000000004006f8 :
  4006f8:       a9bf7bfd        stp     x29, x30, [sp,#-16]!
  4006fc:       910003fd        mov     x29, sp
  400700:       90000080        adrp    x0, 410000 <__FRAME_END__+0xf698>
  400704:       f944c001        ldr     x1, [x0,#2432]
  400708:       91260000        add     x0, x0, #0x980
  40070c:       b4000081        cbz     x1, 40071c <frame_dummy+0x24>
  400710:       580000c1        ldr     x1, 400728 <frame_dummy+0x30>
  400714:       b4000041        cbz     x1, 40071c <frame_dummy+0x24>
  400718:       d63f0020        blr     x1
  40071c:       a8c17bfd        ldp     x29, x30, [sp],#16
  400720:       17ffffd6        b       400678 
  400724:       d503201f        nop
        ...

0000000000400730 :
  400730:       a9bc7bfd        stp     x29, x30, [sp,#-64]!
  400734:       910003fd        mov     x29, sp
  400738:       a90153f3        stp     x19, x20, [sp,#16]
  40073c:       a90363f7        stp     x23, x24, [sp,#48]
  400740:       90000094        adrp    x20, 410000 <__FRAME_END__+0xf698>
  400744:       90000098        adrp    x24, 410000 <__FRAME_END__+0xf698>
  400748:       9125c318        add     x24, x24, #0x970
  40074c:       9125e294        add     x20, x20, #0x978
  400750:       cb180294        sub     x20, x20, x24
  400754:       9343fe94        asr     x20, x20, #3
  400758:       a9025bf5        stp     x21, x22, [sp,#32]
  40075c:       2a0003f7        mov     w23, w0
  400760:       aa0103f6        mov     x22, x1
  400764:       aa0203f5        mov     x21, x2
  400768:       d2800013        mov     x19, #0x0                       // #0
  40076c:       97ffff4d        bl      4004a0 
  400770:       b4000134        cbz     x20, 400794 <__libc_csu_init+0x64>
  400774:       f8737b03        ldr     x3, [x24,x19,lsl #3]
  400778:       2a1703e0        mov     w0, w23
  40077c:       aa1603e1        mov     x1, x22
  400780:       aa1503e2        mov     x2, x21
  400784:       d63f0060        blr     x3
  400788:       91000673        add     x19, x19, #0x1
  40078c:       eb14027f        cmp     x19, x20
  400790:       54ffff21        b.ne    400774 <__libc_csu_init+0x44>
  400794:       a94153f3        ldp     x19, x20, [sp,#16]
  400798:       a9425bf5        ldp     x21, x22, [sp,#32]
  40079c:       a94363f7        ldp     x23, x24, [sp,#48]
  4007a0:       a8c47bfd        ldp     x29, x30, [sp],#64
  4007a4:       d65f03c0        ret

00000000004007a8 :
  4007a8:       d65f03c0        ret

Disassembly of section .fini:

00000000004007ac :
  4007ac:       a9bf7bfd        stp     x29, x30, [sp,#-16]!
  4007b0:       910003fd        mov     x29, sp
  4007b4:       a8c17bfd        ldp     x29, x30, [sp],#16
  4007b8:       d65f03c0        ret

Thoughts

It seems, based on my analysis, that a pivotal operation is the storing of variables into the registers as pairs, utilizing STP for said operation. This then allows for iteration over 8 elements of the array at a time. How the compiler chooses to vectorize is still beyond me, but that’s what the lesson is for, right? Regardless, I can now understand basic Assembly, which puts me further ahead, knowledge-wise, than I was in previous weeks.

Step 4

Without modifying the previous lab’s code to utilize the auto-vectorization features of the compiler, along with inline assembly code for further optimizations, here are some thoughts collected after reviewing my peers’ ideas along with my own.

  1. Utilize DUP to duplicate the volume factor into a scalar vector register. Wikipedia describes scalar registers as follows:

    A scalar processor processes only one datum at a time, with typical data items being integers or floating point numbers. A scalar processor is classified as a SISD processor (Single Instruction, Single Data) in Flynn’s taxonomy.

  2. Store the ‘Sample’ Data into a register using LD1. LD1 is an instruction which loads multiple 1-element structures into a vector register.

by RayGervais at March 11, 2017 05:56 PM

March 10, 2017


Len Isac

glibc – proposed approach to optimize difftime/subtract

This is a continuation to my previous post on choosing a glibc function that could potentially be optimized. Now I’ll discuss my proposed approach for potential optimization.

difftime

difftime has a few handlers for calculating doubles and long doubles, but for any other types it will simply subtract the smaller time value from the larger one, negate the result if needed, and return it.

Let’s look at difftime first:

/* Return the difference between TIME1 and TIME0.  */
double
__difftime (time_t time1, time_t time0)
{
  /* Convert to double and then subtract if no double-rounding error could
     result.  */

  if (TYPE_BITS (time_t) <= DBL_MANT_DIG
      || (TYPE_FLOATING (time_t) && sizeof (time_t) < sizeof (long double)))
    return (double) time1 - (double) time0;

  /* Likewise for long double.  */

  if (TYPE_BITS (time_t) <= LDBL_MANT_DIG || TYPE_FLOATING (time_t))
    return (long double) time1 - (long double) time0;

  /* Subtract the smaller integer from the larger, convert the difference to
     double, and then negate if needed.  */

  return time1 < time0 ? - subtract (time0, time1) : subtract (time1, time0);
}

The IF condition for doubles does not contain any significantly expensive operations (i.e., multiply, divide), so it may not be necessary to change anything here. But we know that if the first condition before the OR is met, we won’t need to evaluate the second condition, so this could also be written as:

if (TYPE_BITS (time_t) <= DBL_MANT_DIG) {return (double) time1 - (double) time0;}
if (TYPE_FLOATING (time_t) && sizeof (time_t) < sizeof (long double)) {return (double) time1 - (double) time0;}

Since the first condition is the cheaper of the two, we test it first and immediately return our result if it is met. If not, we then check the next, slightly more expensive condition.

We can apply a similar approach for the second condition:

if (TYPE_FLOATING (time_t)) {return (long double) time1 - (long double) time0;}
if (TYPE_BITS (time_t) <= LDBL_MANT_DIG) {return (long double) time1 - (long double) time0;}

Something else I noticed inside the __difftime function was that the checks for double and long double always return time1 minus time0, regardless of which is the larger value. On my particular machine (x86_64), the second IF condition was true, since TYPE_BITS(time_t) was lower than LDBL_MANT_DIG, so the long double return was being executed.

double 
__difftime (time_t time1, time_t time0)
{
  if (TYPE_BITS (time_t) <= DBL_MANT_DIG
      || (TYPE_FLOATING (time_t) && sizeof (time_t) < sizeof (long double))) {
    return (double) time1 - (double) time0;
  }

  if (TYPE_BITS (time_t) <= LDBL_MANT_DIG || TYPE_FLOATING (time_t)) {
    //return time1 < time0 ? (long double) time0 - (long double) time1 : (long double) time1 - (long double) time0;
    return (long double) time1 - (long double) time0;
  }

  return time1 < time0 ? - subtract (time0, time1) : subtract (time1, time0);
}

I wrote a small tester for this:

#include <stdio.h>
#include <time.h>

int main() {

    // test difftime function
    time_t time1 = time(NULL);
    time_t time0 = time(NULL) + 10;
    printf("time1 = %ld\ntime0 = %ld\n", (long) time1, (long) time0);
    double result;
    result = __difftime(time1, time0);
    printf("difftime(time1, time0) = %f\n", result);
    result = __difftime(time0, time1);
    printf("difftime(time0, time1) = %f\n", result);

    return 0;
}

Which outputs:

__difftime
time1 = 1489180977
time0 = 1489180987
difftime(time1, time0) = -10.000000
__difftime
time1 = 1489180987
time0 = 1489180977
difftime(time0, time1) = 10.000000

Both results should return 10, but we are missing the time1 < time0 comparison check in each of those conditions, so I included ternary operators in both:

double
__difftime (time_t time1, time_t time0)
{
  if (TYPE_BITS (time_t) <= DBL_MANT_DIG
      || (TYPE_FLOATING (time_t) && sizeof (time_t) < sizeof (long double))) {
    return time1 < time0 ? (double) time0 - (double) time1 : (double) time1 - (double) time0;
  }

  if (TYPE_BITS (time_t) <= LDBL_MANT_DIG || TYPE_FLOATING (time_t)) {
    return time1 < time0 ? (long double) time0 - (long double) time1 : (long double) time1 - (long double) time0;
  }

  ...
}

New output:

__difftime
time1 = 1489181645
time0 = 1489181655
difftime(time1, time0) = 10.000000
__difftime
time1 = 1489181655
time0 = 1489181645
difftime(time0, time1) = 10.000000

subtract

This function is called for any type other than double or long double. If the time_t type is not a signed type, the function simply returns the result of time1 - time0. If time_t is a signed type, the subtraction has to be handled carefully to avoid overflow.


by Len Isac at March 10, 2017 10:03 PM


Wayne Williams

Project: Optimizing GNU's GLIBC Code

For our individual projects, we are combining the skills we have learned thus far and attempting to make some improvements to the C libraries we all use every day. The C code that I will be attempting to analyze and create optimizations for is:

mktime.c


The purpose of mktime.c is to "Convert a 'struct tm' to a time_t value".

In other words, time_t is the computer's tracking of time as the number of seconds that have transpired since January 1, 1970. Since this is a very large number of seconds, most humans cannot easily convert a given date into time_t. Thus, humans write something like March 10, 2017, and mktime.c takes that human-readable date (struct tm) and converts it into the number of seconds from the beginning of computer time until that date.

The reason I decided to look into this function is that I use functions concerning time in most of my own projects. I also wanted to take a crack at a function that is used quite often by programmers everywhere and make a real contribution. The mktime.c file is fairly long, so I thought I would probably find SOMETHING I could make better inside all that code.

Below I will display the entire source of mktime.c and highlight some of the lines of code that look like I could possibly optimize.

 ----------------------------------------------

/* Convert a 'struct tm' to a time_t value.
   Copyright (C) 1993-2017 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Contributed by Paul Eggert <eggert@twinsun.com>.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, see
   <http://www.gnu.org/licenses/>.  */

/* Define this to have a standalone program to test this implementation of
   mktime.  */
/* #define DEBUG_MKTIME 1 */

#ifndef _LIBC
# include <config.h>
#endif

/* Assume that leap seconds are possible, unless told otherwise.
   If the host has a 'zic' command with a '-L leapsecondfilename' option,
   then it supports leap seconds; otherwise it probably doesn't.  */
#ifndef LEAP_SECONDS_POSSIBLE
# define LEAP_SECONDS_POSSIBLE 1
#endif

#include <time.h>

#include <limits.h>

#include <string.h> /* For the real memcpy prototype.  */

#if defined DEBUG_MKTIME && DEBUG_MKTIME
# include <stdio.h>
# include <stdlib.h>
/* Make it work even if the system's libc has its own mktime routine.  */
# undef mktime
# define mktime my_mktime
#endif /* DEBUG_MKTIME */

/* Some of the code in this file assumes that signed integer overflow
   silently wraps around.  This assumption can't easily be programmed
   around, nor can it be checked for portably at compile-time or
   easily eliminated at run-time.

   Define WRAPV to 1 if the assumption is valid and if
     #pragma GCC optimize ("wrapv")
   does not trigger GCC bug 51793
   <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51793>.
   Otherwise, define it to 0; this forces the use of slower code that,
   while not guaranteed by the C Standard, works on all production
   platforms that we know about.  */
#ifndef WRAPV
# if (((__GNUC__ == 4 && 4 <= __GNUC_MINOR__) || 4 < __GNUC__) \
      && defined __GLIBC__)
#  pragma GCC optimize ("wrapv")
#  define WRAPV 1
# else
#  define WRAPV 0
# endif
#endif

/* Verify a requirement at compile-time (unlike assert, which is runtime).  */
#define verify(name, assertion) struct name { char a[(assertion) ? 1 : -1]; }

/* A signed type that is at least one bit wider than int.  */
#if INT_MAX <= LONG_MAX / 2
typedef long int long_int;
#else
typedef long long int long_int;
#endif
verify (long_int_is_wide_enough, INT_MAX == INT_MAX * (long_int) 2 / 2);

/* Shift A right by B bits portably, by dividing A by 2**B and
   truncating towards minus infinity.  A and B should be free of side
   effects, and B should be in the range 0 <= B <= INT_BITS - 2, where
   INT_BITS is the number of useful bits in an int.  GNU code can
   assume that INT_BITS is at least 32.

   ISO C99 says that A >> B is implementation-defined if A < 0.  Some
   implementations (e.g., UNICOS 9.0 on a Cray Y-MP EL) don't shift
   right in the usual way when A < 0, so SHR falls back on division if
   ordinary A >> B doesn't seem to be the usual signed shift.  */
#define SHR(a, b)                                               \
  ((-1 >> 1 == -1                                               \
    && (long_int) -1 >> 1 == -1                                 \
    && ((time_t) -1 >> 1 == -1 || ! TYPE_SIGNED (time_t)))      \    CODE USED OFTEN!!
   ? (a) >> (b)                                                 \
   : (a) / (1 << (b)) - ((a) % (1 << (b)) < 0))

/* The extra casts in the following macros work around compiler bugs,
   e.g., in Cray C 5.0.3.0.  */

/* True if the arithmetic type T is an integer type.  bool counts as
   an integer.  */
#define TYPE_IS_INTEGER(t) ((t) 1.5 == 1)

/* True if negative values of the signed integer type T use two's
   complement, or if T is an unsigned integer type.  */
#define TYPE_TWOS_COMPLEMENT(t) ((t) ~ (t) 0 == (t) -1)

/* True if the arithmetic type T is signed.  */
#define TYPE_SIGNED(t) (! ((t) 0 < (t) -1))

/* The maximum and minimum values for the integer type T.  These
   macros have undefined behavior if T is signed and has padding bits.
   If this is a problem for you, please let us know how to fix it for
   your host.  */
#define TYPE_MINIMUM(t) \
  ((t) (! TYPE_SIGNED (t) \
        ? (t) 0 \
        : ~ TYPE_MAXIMUM (t)))
#define TYPE_MAXIMUM(t) \
  ((t) (! TYPE_SIGNED (t) \
        ? (t) -1 \
        : ((((t) 1 << (sizeof (t) * CHAR_BIT - 2)) - 1) * 2 + 1)))      STRENGTH REDUCTION?

#ifndef TIME_T_MIN
# define TIME_T_MIN TYPE_MINIMUM (time_t)
#endif
#ifndef TIME_T_MAX
# define TIME_T_MAX TYPE_MAXIMUM (time_t)
#endif
#define TIME_T_MIDPOINT (SHR (TIME_T_MIN + TIME_T_MAX, 1) + 1)

verify (time_t_is_integer, TYPE_IS_INTEGER (time_t));
verify (twos_complement_arithmetic,
        (TYPE_TWOS_COMPLEMENT (int)
         && TYPE_TWOS_COMPLEMENT (long_int)
         && TYPE_TWOS_COMPLEMENT (time_t)));

#define EPOCH_YEAR 1970
#define TM_YEAR_BASE 1900
verify (base_year_is_a_multiple_of_100, TM_YEAR_BASE % 100 == 0);

/* Return 1 if YEAR + TM_YEAR_BASE is a leap year.  */
static int
leapyear (long_int year)
{
  /* Don't add YEAR to TM_YEAR_BASE, as that might overflow.
     Also, work even if YEAR is negative.  */
  return
    ((year & 3) == 0
     && (year % 100 != 0
         || ((year / 100) & 3) == (- (TM_YEAR_BASE / 100) & 3)));
}

/* How many days come before each month (0-12).  */
#ifndef _LIBC
static
#endif
const unsigned short int __mon_yday[2][13] =                         COMBINE INTO SINGLE ARRAY?
  {
    /* Normal years.  */
    { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
    /* Leap years.  */
    { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
  };


#ifndef _LIBC
/* Portable standalone applications should supply a <time.h> that
   declares a POSIX-compliant localtime_r, for the benefit of older
   implementations that lack localtime_r or have a nonstandard one.
   See the gnulib time_r module for one way to implement this.  */
# undef __localtime_r
# define __localtime_r localtime_r
# define __mktime_internal mktime_internal
# include "mktime-internal.h"
#endif

/* Return 1 if the values A and B differ according to the rules for
   tm_isdst: A and B differ if one is zero and the other positive.  */
static int
isdst_differ (int a, int b)
{
  return (!a != !b) && (0 <= a) && (0 <= b);
}

/* Return an integer value measuring (YEAR1-YDAY1 HOUR1:MIN1:SEC1) -
   (YEAR0-YDAY0 HOUR0:MIN0:SEC0) in seconds, assuming that the clocks
   were not adjusted between the time stamps.

   The YEAR values uses the same numbering as TP->tm_year.  Values
   need not be in the usual range.  However, YEAR1 must not be less
   than 2 * INT_MIN or greater than 2 * INT_MAX.

   The result may overflow.  It is the caller's responsibility to
   detect overflow.  */

static time_t
ydhms_diff (long_int year1, long_int yday1, int hour1, int min1, int sec1,
   int year0, int yday0, int hour0, int min0, int sec0)
{
  verify (C99_integer_division, -1 / 2 == 0);                   PRECALCULATE TO -0.5 ?

  /* Compute intervening leap days correctly even if year is negative.
     Take care to avoid integer overflow here.  */
  int a4 = SHR (year1, 2) + SHR (TM_YEAR_BASE, 2) - ! (year1 & 3);      REPETITIOUS,
  int b4 = SHR (year0, 2) + SHR (TM_YEAR_BASE, 2) - ! (year0 & 3);      USE CONSTANTS?
  int a100 = a4 / 25 - (a4 % 25 < 0);
  int b100 = b4 / 25 - (b4 % 25 < 0);                     STRENGTH REDUCTIONS?
  int a400 = SHR (a100, 2);
  int b400 = SHR (b100, 2);
  int intervening_leap_days = (a4 - b4) - (a100 - b100) + (a400 - b400);

  /* Compute the desired time in time_t precision.  Overflow might
     occur here.  */
  time_t tyear1 = year1;
  time_t years = tyear1 - year0;
  time_t days = 365 * years + yday1 - yday0 + intervening_leap_days;
  time_t hours = 24 * days + hour1 - hour0;
  time_t minutes = 60 * hours + min1 - min0;
  time_t seconds = 60 * minutes + sec1 - sec0;
  return seconds;
}

/* Return the average of A and B, even if A + B would overflow.  */
static time_t
time_t_avg (time_t a, time_t b)
{
  return SHR (a, 1) + SHR (b, 1) + (a & b & 1);
}

/* Return 1 if A + B does not overflow.  If time_t is unsigned and if
   B's top bit is set, assume that the sum represents A - -B, and
   return 1 if the subtraction does not wrap around.  */
static int
time_t_add_ok (time_t a, time_t b)
{
  if (! TYPE_SIGNED (time_t))
    {
      time_t sum = a + b;
      return (sum < a) == (TIME_T_MIDPOINT <= b);
    }
  else if (WRAPV)
    {
      time_t sum = a + b;
      return (sum < a) == (b < 0);
    }
  else
    {
      time_t avg = time_t_avg (a, b);                                                                 INLINING?
      return TIME_T_MIN / 2 <= avg && avg <= TIME_T_MAX / 2;           MAX * 0.5 ?
    }
}

/* Return 1 if A + B does not overflow.  */
static int
time_t_int_add_ok (time_t a, int b)
{
  verify (int_no_wider_than_time_t, INT_MAX <= TIME_T_MAX);
  if (WRAPV)
    {
      time_t sum = a + b;
      return (sum < a) == (b < 0);
    }
  else
    {
      int a_odd = a & 1;
      time_t avg = SHR (a, 1) + (SHR (b, 1) + (a_odd & b));
      return TIME_T_MIN / 2 <= avg && avg <= TIME_T_MAX / 2;
    }
}

/* Return a time_t value corresponding to (YEAR-YDAY HOUR:MIN:SEC),
   assuming that *T corresponds to *TP and that no clock adjustments
   occurred between *TP and the desired time.
   If TP is null, return a value not equal to *T; this avoids false matches.
   If overflow occurs, yield the minimal or maximal value, except do not
   yield a value equal to *T.  */
static time_t
guess_time_tm (long_int year, long_int yday, int hour, int min, int sec,
      const time_t *t, const struct tm *tp)
{
  if (tp)
    {
      time_t d = ydhms_diff (year, yday, hour, min, sec,
                             tp->tm_year, tp->tm_yday,
                             tp->tm_hour, tp->tm_min, tp->tm_sec);
      if (time_t_add_ok (*t, d))
        return *t + d;
    }

  /* Overflow occurred one way or another.  Return the nearest result
     that is actually in range, except don't report a zero difference
     if the actual difference is nonzero, as that would cause a false
     match; and don't oscillate between two values, as that would
     confuse the spring-forward gap detector.  */
  return (*t < TIME_T_MIDPOINT
 ? (*t <= TIME_T_MIN + 1 ? *t + 1 : TIME_T_MIN)
 : (TIME_T_MAX - 1 <= *t ? *t - 1 : TIME_T_MAX));
}

/* Use CONVERT to convert *T to a broken down time in *TP.
   If *T is out of range for conversion, adjust it so that
   it is the nearest in-range value and then convert that.  */
static struct tm *
ranged_convert (struct tm *(*convert) (const time_t *, struct tm *),
time_t *t, struct tm *tp)
{
  struct tm *r = convert (t, tp);                                                      INLINING?

  if (!r && *t)                                                                      SHORT-CIRCUIT EVALUATION(&&)
    {
      time_t bad = *t;
      time_t ok = 0;

      /* BAD is a known unconvertible time_t, and OK is a known good one.
         Use binary search to narrow the range between BAD and OK until
         they differ by 1.  */
      while (bad != ok + (bad < 0 ? -1 : 1))
        {
          time_t mid = *t = time_t_avg (ok, bad);
          r = convert (t, tp);
          if (r)
            ok = mid;
          else
            bad = mid;
        }

      if (!r && ok)
        {
          /* The last conversion attempt failed;
             revert to the most recent successful attempt.  */
          *t = ok;
          r = convert (t, tp);
        }
    }

  return r;
}


/* Convert *TP to a time_t value, inverting
   the monotonic and mostly-unit-linear conversion function CONVERT.
   Use *OFFSET to keep track of a guess at the offset of the result,
   compared to what the result would be for UTC without leap seconds.
   If *OFFSET's guess is correct, only one CONVERT call is needed.
   This function is external because it is used also by timegm.c.  */
time_t
__mktime_internal (struct tm *tp,
  struct tm *(*convert) (const time_t *, struct tm *),
  time_t *offset)
{
  time_t t, gt, t0, t1, t2;
  struct tm tm;

  /* The maximum number of probes (calls to CONVERT) should be enough
     to handle any combinations of time zone rule changes, solar time,
     leap seconds, and oscillations around a spring-forward gap.
     POSIX.1 prohibits leap seconds, but some hosts have them anyway.  */
  int remaining_probes = 6;

  /* Time requested.  Copy it in case CONVERT modifies *TP; this can
     occur if TP is localtime's returned value and CONVERT is localtime.  */
  int sec = tp->tm_sec;
  int min = tp->tm_min;
  int hour = tp->tm_hour;
  int mday = tp->tm_mday;
  int mon = tp->tm_mon;
  int year_requested = tp->tm_year;
  int isdst = tp->tm_isdst;

  /* 1 if the previous probe was DST.  */
  int dst2;

  /* Ensure that mon is in range, and set year accordingly.  */
  int mon_remainder = mon % 12;
  int negative_mon_remainder = mon_remainder < 0;
  int mon_years = mon / 12 - negative_mon_remainder;
  long_int lyear_requested = year_requested;
  long_int year = lyear_requested + mon_years;

  /* The other values need not be in range:
     the remaining code handles minor overflows correctly,
     assuming int and time_t arithmetic wraps around.
     Major overflows are caught at the end.  */

  /* Calculate day of year from year, month, and day of month.
     The result need not be in range.  */
  int mon_yday = ((__mon_yday[leapyear (year)]
                   [mon_remainder + 12 * negative_mon_remainder])
                  - 1);
  long_int lmday = mday;
  long_int yday = mon_yday + lmday;

  time_t guessed_offset = *offset;

  int sec_requested = sec;

  if (LEAP_SECONDS_POSSIBLE)
    {
      /* Handle out-of-range seconds specially,
         since ydhms_tm_diff assumes every minute has 60 seconds.  */
      if (sec < 0)
        sec = 0;
      if (59 < sec)
        sec = 59;
    }

  /* Invert CONVERT by probing.  First assume the same offset as last
     time.  */

  t0 = ydhms_diff (year, yday, hour, min, sec,
  EPOCH_YEAR - TM_YEAR_BASE, 0, 0, 0, - guessed_offset);

  if (TIME_T_MAX / INT_MAX / 366 / 24 / 60 / 60 < 3)       TOO MANY DIVISIONS, SIMPLIFY?
    {
      /* time_t isn't large enough to rule out overflows, so check
for major overflows.  A gross check suffices, since if t0
has overflowed, it is off by a multiple of TIME_T_MAX -
TIME_T_MIN + 1.  So ignore any component of the difference
that is bounded by a small value.  */

      /* Approximate log base 2 of the number of time units per
biennium.  A biennium is 2 years; use this unit instead of
years to avoid integer overflow.  For example, 2 average
Gregorian years are 2 * 365.2425 * 24 * 60 * 60 seconds,
which is 63113904 seconds, and rint (log2 (63113904)) is
26.  */
      int ALOG2_SECONDS_PER_BIENNIUM = 26;
      int ALOG2_MINUTES_PER_BIENNIUM = 20;
      int ALOG2_HOURS_PER_BIENNIUM = 14;
      int ALOG2_DAYS_PER_BIENNIUM = 10;
      int LOG2_YEARS_PER_BIENNIUM = 1;

      int approx_requested_biennia =
(SHR (year_requested, LOG2_YEARS_PER_BIENNIUM)
- SHR (EPOCH_YEAR - TM_YEAR_BASE, LOG2_YEARS_PER_BIENNIUM)
+ SHR (mday, ALOG2_DAYS_PER_BIENNIUM)
+ SHR (hour, ALOG2_HOURS_PER_BIENNIUM)
+ SHR (min, ALOG2_MINUTES_PER_BIENNIUM)
+ (LEAP_SECONDS_POSSIBLE
   ? 0
   : SHR (sec, ALOG2_SECONDS_PER_BIENNIUM)));

      int approx_biennia = SHR (t0, ALOG2_SECONDS_PER_BIENNIUM);
      int diff = approx_biennia - approx_requested_biennia;
      int approx_abs_diff = diff < 0 ? -1 - diff : diff;

      /* IRIX 4.0.5 cc miscalculates TIME_T_MIN / 3: it erroneously
gives a positive value of 715827882.  Setting a variable
first then doing math on it seems to work.
(ghazi@caip.rutgers.edu) */
      time_t time_t_max = TIME_T_MAX;
      time_t time_t_min = TIME_T_MIN;
      time_t overflow_threshold =
(time_t_max / 3 - time_t_min / 3) >> ALOG2_SECONDS_PER_BIENNIUM;
WHY NOT (MAX - MIN) / 3 ?? ... DIVIDING TWICE IS REDUNDANT
      if (overflow_threshold < approx_abs_diff)
{
 /* Overflow occurred.  Try repairing it; this might work if
    the time zone offset is enough to undo the overflow.  */
 time_t repaired_t0 = -1 - t0;
 approx_biennia = SHR (repaired_t0, ALOG2_SECONDS_PER_BIENNIUM);
 diff = approx_biennia - approx_requested_biennia;
 approx_abs_diff = diff < 0 ? -1 - diff : diff;
 if (overflow_threshold < approx_abs_diff)
   return -1;
 guessed_offset += repaired_t0 - t0;
 t0 = repaired_t0;
}
    }

  /* Repeatedly use the error to improve the guess.  */

  for (t = t1 = t2 = t0, dst2 = 0;
       (gt = guess_time_tm (year, yday, hour, min, sec, &t,
   ranged_convert (convert, &t, &tm)),
t != gt);
       t1 = t2, t2 = t, t = gt, dst2 = tm.tm_isdst != 0)
    if (t == t1 && t != t2
&& (tm.tm_isdst < 0                       SHORT-CIRCUIT EVALUATIONS
   || (isdst < 0
? dst2 <= (tm.tm_isdst != 0)
: (isdst != 0) != (tm.tm_isdst != 0))))
      /* We can't possibly find a match, as we are oscillating
between two values.  The requested time probably falls
within a spring-forward gap of size GT - T.  Follow the common
practice in this case, which is to return a time that is GT - T
away from the requested time, preferring a time whose
tm_isdst differs from the requested value.  (If no tm_isdst
was requested and only one of the two values has a nonzero
tm_isdst, prefer that value.)  In practice, this is more
useful than returning -1.  */
      goto offset_found;
    else if (--remaining_probes == 0)
      return -1;

  /* We have a match.  Check whether tm.tm_isdst has the requested
     value, if any.  */
  if (isdst_differ (isdst, tm.tm_isdst))
    {
      /* tm.tm_isdst has the wrong value.  Look for a neighboring
time with the right value, and use its UTC offset.

Heuristic: probe the adjacent timestamps in both directions,
looking for the desired isdst.  This should work for all real
time zone histories in the tz database.  */

      /* Distance between probes when looking for a DST boundary.  In
tzdata2003a, the shortest period of DST is 601200 seconds
(e.g., America/Recife starting 2000-10-08 01:00), and the
shortest period of non-DST surrounded by DST is 694800
seconds (Africa/Tunis starting 1943-04-17 01:00).  Use the
minimum of these two values, so we don't miss these short
periods when probing.  */
      int stride = 601200;

      /* The longest period of DST in tzdata2003a is 536454000 seconds
(e.g., America/Jujuy starting 1946-10-01 01:00).  The longest
period of non-DST is much longer, but it makes no real sense
to search for more than a year of non-DST, so use the DST
max.  */
      int duration_max = 536454000;

      /* Search in both directions, so the maximum distance is half
the duration; add the stride to avoid off-by-1 problems.  */
      int delta_bound = duration_max / 2 + stride;      ALREADY HAVE CONSTANT! USE IT

      int delta, direction;

      for (delta = stride; delta < delta_bound; delta += stride)
for (direction = -1; direction <= 1; direction += 2)
 if (time_t_int_add_ok (t, delta * direction))
   {
     time_t ot = t + delta * direction;           DIRECTION IS ONLY EVER -1 AND 1 ...
     struct tm otm;
     ranged_convert (convert, &ot, &otm);
     if (! isdst_differ (isdst, otm.tm_isdst))
{
 /* We found the desired tm_isdst.
    Extrapolate back to the desired time.  */
 t = guess_time_tm (year, yday, hour, min, sec, &ot, &otm);
 ranged_convert (convert, &t, &tm);
 goto offset_found;
}
   }
    }

 offset_found:
  *offset = guessed_offset + t - t0;

  if (LEAP_SECONDS_POSSIBLE && sec_requested != tm.tm_sec)
    {
      /* Adjust time to reflect the tm_sec requested, not the normalized value.
Also, repair any damage from a false match due to a leap second.  */
      int sec_adjustment = (sec == 0 && tm.tm_sec == 60) - sec;
      if (! time_t_int_add_ok (t, sec_requested))
return -1;
      t1 = t + sec_requested;
      if (! time_t_int_add_ok (t1, sec_adjustment))       REPETITIVE-LOOKING... SIMPLIFY??
return -1;
      t2 = t1 + sec_adjustment;
      if (! convert (&t2, &tm))
return -1;
      t = t2;
    }

  *tp = tm;
  return t;
}


/* FIXME: This should use a signed type wide enough to hold any UTC
   offset in seconds.  'int' should be good enough for GNU code.  We
   can't fix this unilaterally though, as other modules invoke
   __mktime_internal.  */
static time_t localtime_offset;

/* Convert *TP to a time_t value.  */
time_t
mktime (struct tm *tp)
{
#ifdef _LIBC
  /* POSIX.1 8.1.1 requires that whenever mktime() is called, the
     time zone names contained in the external variable 'tzname' shall
     be set as if the tzset() function had been called.  */
  __tzset ();
#endif

  return __mktime_internal (tp, __localtime_r, &localtime_offset);
}

#ifdef weak_alias
weak_alias (mktime, timelocal)
#endif

#ifdef _LIBC
libc_hidden_def (mktime)
libc_hidden_weak (timelocal)
#endif

#if defined DEBUG_MKTIME && DEBUG_MKTIME

static int
not_equal_tm (const struct tm *a, const struct tm *b)
{
  return ((a->tm_sec ^ b->tm_sec)
 | (a->tm_min ^ b->tm_min)
 | (a->tm_hour ^ b->tm_hour)
 | (a->tm_mday ^ b->tm_mday)
 | (a->tm_mon ^ b->tm_mon)
 | (a->tm_year ^ b->tm_year)
 | (a->tm_yday ^ b->tm_yday)
 | isdst_differ (a->tm_isdst, b->tm_isdst));
}

static void
print_tm (const struct tm *tp)
{
  if (tp)
    printf ("%04d-%02d-%02d %02d:%02d:%02d yday %03d wday %d isdst %d",
   tp->tm_year + TM_YEAR_BASE, tp->tm_mon + 1, tp->tm_mday,
   tp->tm_hour, tp->tm_min, tp->tm_sec,
   tp->tm_yday, tp->tm_wday, tp->tm_isdst);
  else
    printf ("0");
}

static int
check_result (time_t tk, struct tm tmk, time_t tl, const struct tm *lt)
{
  if (tk != tl || !lt || not_equal_tm (&tmk, lt))                   SHORT-CIRCUIT EVALUATIONS ( || )
    {
      printf ("mktime (");
      print_tm (lt);
      printf (")\nyields (");
      print_tm (&tmk);
      printf (") == %ld, should be %ld\n", (long int) tk, (long int) tl);
      return 1;
    }

  return 0;
}

int
main (int argc, char **argv)
{
  int status = 0;
  struct tm tm, tmk, tml;
  struct tm *lt;
  time_t tk, tl, tl1;
  char trailer;

  if ((argc == 3 || argc == 4)                            (ARGC > 2 && ARGC < 5) POSSIBLY CHEAPER?
      && (sscanf (argv[1], "%d-%d-%d%c",
 &tm.tm_year, &tm.tm_mon, &tm.tm_mday, &trailer)
 == 3)
      && (sscanf (argv[2], "%d:%d:%d%c",
 &tm.tm_hour, &tm.tm_min, &tm.tm_sec, &trailer)
 == 3))
    {
      tm.tm_year -= TM_YEAR_BASE;
      tm.tm_mon--;
      tm.tm_isdst = argc == 3 ? -1 : atoi (argv[3]);
      tmk = tm;
      tl = mktime (&tmk);
      lt = localtime (&tl);
      if (lt)
{
 tml = *lt;
 lt = &tml;
}
      printf ("mktime returns %ld == ", (long int) tl);
      print_tm (&tmk);
      printf ("\n");
      status = check_result (tl, tmk, tl, lt);
    }
  else if (argc == 4 || (argc == 5 && strcmp (argv[4], "-") == 0))    SHORT-CIRCUIT EVALUATION
    {
      time_t from = atol (argv[1]);
      time_t by = atol (argv[2]);
      time_t to = atol (argv[3]);

      if (argc == 4)
for (tl = from; by < 0 ? to <= tl : tl <= to; tl = tl1)
 {
   lt = localtime (&tl);                                              INLINING??
   if (lt)
     {
tmk = tml = *lt;
tk = mktime (&tmk);
status |= check_result (tk, tmk, tl, &tml);
     }
   else
     {
printf ("localtime (%ld) yields 0\n", (long int) tl);
status = 1;
     }
   tl1 = tl + by;
   if ((tl1 < tl) != (by < 0))
     break;
 }
      else
for (tl = from; by < 0 ? to <= tl : tl <= to; tl = tl1)
 {
   /* Null benchmark.  */
   lt = localtime (&tl);
   if (lt)
     {
tmk = tml = *lt;
tk = tl;
status |= check_result (tk, tmk, tl, &tml);
     }
   else
     {
printf ("localtime (%ld) yields 0\n", (long int) tl);
status = 1;
     }
   tl1 = tl + by;
   if ((tl1 < tl) != (by < 0))
     break;
 }
    }
  else
    printf ("Usage:\
\t%s YYYY-MM-DD HH:MM:SS [ISDST] # Test given time.\n\
\t%s FROM BY TO # Test values FROM, FROM+BY, ..., TO.\n\
\t%s FROM BY TO - # Do not test those values (for benchmark).\n",
   argv[0], argv[0], argv[0]);

  return status;
}

#endif /* DEBUG_MKTIME */

/*
Local Variables:
compile-command: "gcc -DDEBUG_MKTIME -I. -Wall -W -O2 -g mktime.c -o mktime"
End:
*/
-----------------------------------------

So here is my initial take on some possible things to look into. More detailed information on changes and testing will follow. Stay tuned!

During the mini-presentation, the professor mentioned that I need to consider whether the compiler is already doing many of those optimizations.

https://msdn.microsoft.com/en-us/library/ms973852.aspx
https://www.functions-online.com/mktime.html

by Siloaman (noreply@blogger.com) at March 10, 2017 07:52 PM


Nagashashank

Release 0.2

Recap of the issues I have worked on and am still working on:
Issue #1779: Choosing “Upload” twice and cancelling both causes Brackets to hang and become unusable.
Issue #1790: Missing hover effect and color consistency in the Upload dialog.

Both these bugs were filed by me, and they are related to the Upload file dialog prompt.

For the first bug, I initially tried to solve the issue by checking for the HTML class `upload-files-input-elem` (UI state) using jQuery. That appeared to work at first, but I didn’t notice that the fix was still creating new instances of the Upload dialog prompt; it was just not displaying the prompt more than once.

So I opened a pull request with the first fix; @humphd then saw the problem with this approach and recommended an alternative solution.
Once I understood the solution, I changed the pull request to adhere to it.

It was at this time that something happened to my Thimble after updating it: I could no longer see the text in the Upload dialog prompt.

I tried many things to solve this issue:

  1. vagrant reload, vagrant up --provision
  2. Reinstalling
    1. rm -rf thimble.*  and rm -rf brackets
    2. Followed `http://blog.humphd.org/fixing-a-bug-in-mozilla-thimble/` to set up again.
  3. Reforking
    1. Removed my fork of thimble and brackets
    2. Reforked
    3. (2.2)
  4. Tried to set up Mozilla’s repo instead of my fork
  5. Installed Windows via Boot Camp and tried a brand-new setup

Here is a YouTube video of an attempted fresh install: https://youtu.be/upkfMYJLOxs

None of these worked, for some reason, so I opened a new issue, #1819, and provided some of my research in it.
Screen Shot 2017-03-14 at 10.49.58 PM.png

@flukeout suggested that I run some commands to see if they would fix the issue.
Screen Shot 2017-03-14 at 10.55.07 PM.png

I still had no luck getting it to work.

I thought it was only my MacBook that was having the problem; however, a few of my friends also had the same issue after updating their local repos from the upstream master.
Because of this issue, I was not able to work on Issue #1790, as I couldn’t see the text elements.

Getting back to the first bug for release 0.2:
I was able to test it quickly. ezgif-2-610656eacf.gif
It seemed to fix the issue. Later, when @gideonthomas was reviewing the pull request, he suggested a few small changes and asked one question: should we return _uploadDialog.deferred.promise(); instead of return _uploadDialog;? @humphd agreed with @gideonthomas, so I made the change and updated the pull request.
I tested again to make sure it worked fine, and then I noticed that this introduces a new bug:
if we cancel the dialog, we cannot open the Upload dialog prompt anymore (unless we refresh). ezgif-2-a8117655de
I’m currently trying to see what causes this issue, so I changed the pull request title and added [WIP] (work in progress).

 

 


by npolugari at March 10, 2017 12:02 PM

March 09, 2017


Matt Welke

Studying for the Exam

I may not be in class this semester, but it sure does feel like I’m studying for an exam.

That’s what doing the documentation feels like. It reminds me of reviewing the little details you learn over the semester to make sure you understand them and how they relate to one another. I thought doing documentation would be a matter of writing a few blurbs explaining what we made and how it works. But most of my time has been spent consulting the source code and my teammate to remind myself how a feature works. And I finish each documentation section with a much better understanding of what we created. It’s been a long time since September, and now I can view the design decisions we made in a new light.

Some examples:

In our JavaScript code, we chose to store Dates as Numbers. We used the number of milliseconds since the UNIX epoch. Why? Because it was accurate and explicit. There’s no way to misinterpret that. We felt that storing them as strings might be a bad idea because there’s no standard. It turns out the ORM we use (Mongoose) ended up storing them as the Double type in MongoDB, which is kind of weird for a whole number. And I’ve since learned that there is indeed a standard for storing a date as a string, the ISO 8601 format, and it’s well supported by mainstream programming languages. It also turns out that using strings for dates is actually more developer-friendly, since we could then look inside our MongoDB collections and instantly know when something happened. Oops.

In MongoDB, we chose to convert some of the primary keys for our collections from ObjectIds to Strings (being the string representation of the ObjectId). To be honest, I don’t even remember the exact reason for this, except that it had to do with compatibility. ObjectIds are a BSON type, so there’s no representation of them in JavaScript without using a library to do it. This happens for all of the BSON types in MongoDB that lack JSON equivalents. In retrospect, I would have either used just Strings consistently from the beginning, or put in the effort to consistently use BSON types (by using whatever libraries necessary to do whatever conversions necessary) from the beginning. Right now, we have a combination of Strings for some collections and ObjectIds for others, and when you factor in that the primary keys often act as references due to the somewhat relational nature of our data, this causes me some mental overhead I’d rather not have. Oops.

The problems I’ve described aren’t major issues. They don’t impact the functionality of our creation, they just break standards (which, in our defense, we weren’t familiar with when we started) and cause some issues with maintainability. Luckily, this is CDOT. It’s open source. So the world can clean it up for us. (Thank you, world.) Or maybe we’ll even have some time for some cleanup before the end of the project.

One thing is for sure – I’ve learned a ton working on this project and I have no doubt that the code I write for web apps going forward is going to look a lot better than the code I produced in class before starting this project.


by Matt at March 09, 2017 10:17 PM


Nagashashank

Lab 6: Editor tryouts

For this lab, our task was to choose two editors we had never used and play around with them. I decided to go with IntelliJ IDEA and Komodo IDE.

IntelliJ IDEA was made by JetBrains and is available as an Apache 2-licensed Community Edition. It was first released in January 2001, and it was one of the first IDEs with advanced code navigation and code refactoring capabilities.

Komodo IDE was made by ActiveState, a Canadian software company headquartered in Vancouver. It was first released in May 2000, and it uses the Mozilla and Scintilla code bases.

Start Screen and Opening File:

Screen Shot 2017-03-09 at 7.21.06 PM 
This is the start screen upon launching the IntelliJ IDEA application. It provides quick options for creating new projects, importing projects, and opening files.

Screen Shot 2017-03-09 at 7.00.16 PM.png
This is the start screen upon launching Komodo application.
Screen Shot 2017-03-09 at 7.04.13 PM.png Komodo has more options for quick start. 

Changing the indent from tabs to spaces:
Tabs Gif.gif
IntelliJ allows us to change the tab and space settings per language.

Tabs gif.gif
Komodo allows us to change the default spacing, and also per language.

Code Completion:
This is how to toggle code completion in IntelliJ IDEA.
code complete gif.gif

This is how to toggle code completion in Komodo, which also allows us to add more languages.
autocomplete gif.gif

Addons:

In IntelliJ, there are tons of plugins available. It has a plugin manager to download and install them, and there is also a website for downloading plugins.
plugins gif.gif

Komodo also has an add-on manager, but for some reason it didn’t work.
Screen Shot 2017-03-12 at 7.29.52 PM.png
However, there is still a way to install add-ons: via their website. Screen Shot 2017-03-12 at 7.20.03 PM.png

.ignore
Some of the features it provides:

  1. File syntax highlighting
  2. Filtering and selecting Gitignore templates in the rules generator by name and content
  3. Generating Gitignore rules based on GitHub’s template collection
  4. Adding a selected file/directory to the ignore rules from the popup menu
  5. Suggesting .gitignore file creation for new projects

screenshot_14958.pngscreenshot_14959.png

komodo-quickdiff
Adds inline diff indicators, placed in the left margin.
example.png

RGBA ColorPicker
This plugin allows us to pick color codes visually.

screenshot01.png

Focus Mode
Allows us to focus on coding and hides the rest. screenshot.png

Overall, I would choose IntelliJ, mainly because it’s free and has a big supporting community.


by npolugari at March 09, 2017 08:59 PM


Badr Modoukh

DPS909 Lab 6 – Picking and Learning a Good Editor

The two editors I wanted to explore with are Atom and Brackets. I never used these editors before and wanted to explore what they offer and see which one I liked more than the other.

After experimenting with these two editors I decided to use Atom which can be downloaded from https://atom.io/.

I liked using Atom more than Brackets because I felt more comfortable with its layout, and the code’s font and size are displayed more clearly. I also found it easier to customize and to find different packages to install. Even though I chose Atom over Brackets, I found Brackets to be not bad at all and recommend trying it out.

Here’s how Atom looks with the entire Mozilla Brackets project opened in it:

Screen Shot 2017-03-08 at 6.49.35 PM.png

And this is how Brackets looks:

Screen Shot 2017-03-08 at 6.51.00 PM.png

I will be demonstrating how to do 5 simple tasks with the Atom editor. These tasks are:

  • How to open an entire project
  • How to open the editor from the command line
  • How to split the screen into multiple panes
  • How to install editor extensions (packages)
  • How to change the theme of the editor

How to open an entire project:

Like most editors opening an entire project is done in a similar way. When you first open Atom it displays a Welcome Guide with an option to open a project. This is done like this:

openproject.gif

You can also open a project by going to File -> Open… -> and selecting the project you want to open.

How to open the editor from the command line:

Before you can open Atom from the command line, you need to install the shell commands. This is done by going to Atom -> Install Shell Commands. Then you can open Atom from the command line by simply typing “atom” in the terminal. This is done like this:

command.gif

You can also type “atom <file or project path>” which will open a file or project in Atom.

How to split the screen into multiple panes:

Splitting the screen into multiple panes can be done like this:

split.gif

How to install editor extensions (packages):

Installing extensions/packages in Atom is very simple. You go to Atom -> Preferences, which opens a Settings window in the editor. Then you go to Install and type in the package you want to install. This is done like this:

exten.gif

The install section also displays featured packages that you can install.

How to change the theme of the editor:

Many editors enable you to change their theme. To do this in Atom, you go to Atom -> Preferences -> Themes and select from the pre-loaded themes. This is done like this:

change theme.gif

You can also choose to style Atom by editing the stylesheet.

My 5 favourite extensions(packages) for Atom:

Atom offers many extensions (packages) that you can install. The 5 packages I found to be my favourites and most useful are:

linter-jsonlint:

This package I found to be very useful for linting JSON files. It saves you a lot of time when you are trying to find where the syntax error is in a JSON file. There are also other packages available for linting other file types like HTML, CSS, PHP, JavaScript, etc. Here is a demonstration of how this package works:

lint.gif

atom-beautify:

I found this package to be really useful for formatting code. Instead of going through every line to format the code properly, you can use this package to do it for you. It saves a lot of time. Here is a demonstration of this package:

beautify.gif

color-picker:

This package is quite interesting and useful. It shows you a preview of the color you want to pick in the CSS file. Instead of browsing through the different colors available, you can use this package which shows all the colors to you in the editor. Here is a demonstration of this package working:

colorpicker.gif

minimap:

I found this package to be really useful when working on files. It allows you to scroll through a file that contains a lot of lines of code faster. This package saves you a lot of time when scrolling through a file with 1000s of lines of code. The Sublime Text editor has this feature and it was great finding a package for Atom that offers this feature. Here is a demonstration of this package:

minimap.gif

git-plus:

The git-plus package is also one of my favourite in Atom. It enables you to work with Git without leaving the editor. I found this to be quite useful. Inside of the editor you can add, commit, push, pull, and do other git commands.


by badrmodoukh at March 09, 2017 02:41 AM

March 08, 2017


Jerry Goguette

Mozilla Thimble Release 0.2 WIP

For this release, I’ve made some amazing progress.

863aa41c-02ac-11e7-9a8b-184b793d3d7a

As you can see in the gif above, there is a slight freeze in the UI after inserting a snippet. For the time being, I'm storing all the snippets in snippets.js. On the initial load of Thimble, I've set the default file type to HTML because there is no way to determine what file I'm on at that point in time.
Afterward, whenever the file is changed, Thimble receives an activeEditorChanged event which also comes with the full file name. With access to that information, I'm then able to identify the current file type and repopulate the li elements of the snippets menu. I'll most likely have to file a new bug to have brackets send an activeEditorChanged event on startup so the initial project file type can be determined.
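The extension-to-file-type lookup described above could be sketched like this (my own illustration, not the actual Thimble code; it falls back to HTML just like the default behaviour I described):

```javascript
// Hypothetical helper: derive a snippet category from a full file name.
// Unknown or missing extensions fall back to "html".
function snippetTypeForFile(fileName) {
  var ext = fileName.split(".").pop().toLowerCase();
  var types = { html: "html", htm: "html", css: "css", js: "js" };
  return types[ext] || "html";
}

console.log(snippetTypeForFile("style.css"));  // logs: css
console.log(snippetTypeForFile("index.html")); // logs: html
console.log(snippetTypeForFile("README"));     // logs: html (fallback)
```

A table like this would also make it easy to add snippet sets for new file types later.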

At the moment I'm waiting for feedback from people in the Thimble community to help out with the optimization and design. Luke Pacholski says he'll offer me some guidance on the design aspect of the menu. Can't wait!

Here’s a link to my pull request! Keep a close eye as the days go by.

Hopefully, I’ll be able to proceed further.=)


by jgoguette at March 08, 2017 07:26 AM


Max Fainshtein

Assignment 2

For this assignment I have selected issue #451
contextmenu
This issue consists of adding copy, cut, and paste to the context menu for the editor, as well as a check to see whether the browser supports those commands; if not, it should show a dialog.

During this adjustment I used https://github.com/adobe/brackets/pull/12674/files for reference as they have already implemented this modification. I started working on the dialog menu which ended up looking like this 

dialog

The scripts to make this dialog open and close seemed a bit off, but I initially hoped to change them once I got things working. Once I completed it and added functionality to test whether the commands are supported using document.queryCommandSupported(), I did some research on how to include my HTML file in the EditorCommandHandlers.js file and figured out that this method wouldn't work. Once I realized I had hit a dead end, I started on a new path: creating a modal where the HTML code is appended to a variable and then displayed on the current page. I haven't had a chance to try this yet; I am planning on trying it tomorrow.
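The support check itself can be sketched in isolation (the helper name and the fake document here are my own, just to illustrate document.queryCommandSupported()):

```javascript
// Hypothetical helper: given a document-like object, return the
// clipboard commands the browser does NOT report as supported
// via queryCommandSupported().
function unsupportedClipboardCommands(doc) {
  var commands = ["cut", "copy", "paste"];
  return commands.filter(function (cmd) {
    try {
      return !doc.queryCommandSupported(cmd);
    } catch (e) {
      return true; // treat errors as "not supported"
    }
  });
}

// Fake document that only supports cut and copy:
var fakeDoc = {
  queryCommandSupported: function (cmd) {
    return cmd === "cut" || cmd === "copy";
  }
};

console.log(unsupportedClipboardCommands(fakeDoc)); // logs: [ 'paste' ]
```

In the real feature, a non-empty result from a check like this would be what triggers the dialog.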

I also decided to check whether everything was still working, as I had held off on that for a while. I noticed that paste was no longer working since I made changes while playing around with things. This is also on my to-do list. After checking all the work I had done, I was still unable to solve the issue. I am planning on re-pulling and attempting to make the changes again.

I will update this blog once I finish up the remaining work.


by mfainshtein4 at March 08, 2017 05:10 AM


John James

OSD Lab6 (New Editor)

Getting Started:

So the first part of this lab was to pick an editor I have never used before. We were given a list of options, or we could choose another one of our own. I decided to use Atom because it looked like a very good editor with plenty of plugins and extensions.

 

Things I liked:

Some of the things I really liked about Atom were being able to show invisible spaces, to tell whether there are tabs or spaces in certain areas that could mess up styling. Also, being able to open a project in the sidebar really helped me navigate projects.

Here are some cool features I really liked about atom:

 

Custom Keybindings:

CustomKeybindings.gif

Mini Map navigation:

MiniMapNavigation.gif

Searching Packages:

NewPackages.gif

Project Navigation:

Project.gif

Split Screen :

SplitScreen.gif

Git Connection:

ConnectiontoGit.gif

 

AutoComplete:

AutoComplete.gif

Why Atom?

I really enjoyed how customizable Atom was and all the cool things it allowed me to do.

Where to get Atom?

Downloading Atom is simple! Just go here: Atom

and for suggested Packages and themes? Here is my list

  • Pigments
  • File Icons – Gives file types appropriate icons, for easier navigation
  • MiniMap – Gives map of the code, to scroll faster
  • autocomplete-atom-api – Gives suggest syntax for types of code
  • git-plus – allows using GitHub without a terminal

Other things that I have done for preference:

  • Theme: One Dark
  • Font: Fira Mono
  • Markdown Preview

 


by johnjamesa70 at March 08, 2017 05:07 AM


Oleg Mytryniuk

Atom vs Nuclide

I have decided to work on Atom and Nuclide editors.
Among all the editors available in the list, I have worked with most of them except Atom and Nuclide. At the same time, I have heard a lot about Atom from my friends, so I wanted to test it out.

Let’s start to test the first editor in our list – Atom.

ATOM

1. Web page. The web page is well designed and it is easy to find the download link for the editor.
2. Installing. Extremely easy – a regular .exe file; just run it and it will do everything for you.

A) How to open a file, a folder of files (e.g., an entire project)

Once you have opened the editor, you will see the main menu, where you are asked to choose whether you prefer to open a project, install a plugin, and so on. You can use this menu to open a project. You can also open a separate file, or again the whole project, using the top menu.

File -> Open File
File -> Open Project
File -> Add Project Folder
File -> Reopen Last Project

atom

 

B) How to change your indent from tabs to spaces, 2-spaces, 4-spaces, etc?

File -> Settings -> Editor -> Tab Length -> Default: 2

We can easily change the number of default spaces.

atom1.gif

C) How to open the editor from the command line
To do that, first of all we should set the PATH variable for the editor (for our convenience) (like: C:\Users\Admin\AppData\Local\Atom\bin).
Afterwards, we can just call atom.cmd. Otherwise we would have to type the whole path to atom.cmd to be able to run the software.

D) To find something in an open file, we can use Ctrl+F to call the search console.
To find something in any file in the whole project: Ctrl+Shift+F
To find a file: Ctrl+P

E) How to split the screen into multiple panes/editors/views?
Just right click on the tab you would like to use in another pane and choose split direction.

split.gif

F)How to install editor extensions (a.k.a., plugins, packages, etc.)
You can find all of them in Settings(Ctrl + ,) or File -> Settings.

package

G) What are some common key bindings
Instead of describing all of them, I would advise you to open the Keybinding Cheatsheet. You can do that by pressing Ctrl-Alt-/, or selecting Packages > Keybinding Cheatsheet.

H)How to change keybindings.
If you would like to change keybindings, go to Settings and choose the Keybindings menu.

keybinding.gif

I)How to enable/use autocomplete for coding HTML, JS, CSS, etc
To use particular snippets, you need to install a plugin, for example autocomplete-plus or ternjs. We will test this in the next step.

NUCLIDE

Nuclide is a Facebook open-source project. The most important thing I would like to mention is that the editor is developed for macOS and Linux, and it is NOT fully supported on Windows.
In our test, we will run it on Windows anyway.

FYI, Nuclide is a code editor built on the backbone of GitHub's Atom text editor that we tested above; that's why, like the previous editor, Nuclide has a familiar look and feel.
To install Nuclide on Windows, we install it … from Atom. The thing is that Nuclide is a package for Atom.

A) How to open a file, a folder of files (e.g., an entire project)

Basically, same as in Atom, you can open the whole project using top-menu (we are doing this from the Atom menu bar) and clicking on File.

File -> Open File
File -> Open Project
File -> Add Project Folder
File -> Reopen Last Project

You can also add a project by clicking the Add Project Folder button in the left side pane, or using the Ctrl-Shift-O keyboard shortcut. Use Ctrl+Alt+O to open a file.

B) How to change your indent from tabs to spaces, 2-spaces, 4-spaces, etc?

Same as in Atom: File -> Settings -> Editor -> Tab Length -> Default: 2

However, Nuclide has a really cool "Coding standard" feature. This functionality helps you format your code according to the standards you have set. To do that you just need to place your cursor inside the function and press Ctrl-Shift-C to apply the coding standards to the function.

C) How to open the editor from the command line
As I have mentioned before, the editor was designed for Linux and macOS, which is why it supports opening from the command line.
It is only … one command: atom 🙂

D) Find a file.
Most of the searching actions are the same as in Atom. For example, you can search within a file (i.e., Ctrl + F) or throughout your entire project(s) (Ctrl-Shift-F).
In addition to the basic Atom searching, Nuclide adds an additional powerful search functionality that allows you to search in various contexts. OmniSearch (Ctrl-T) provides a way to search, all at once, across your project, within your files, code symbols, etc.

E) How to split the screen into multiple panes/editors/views?
Just right click on the tab you would like to use in another pane and choose split direction.

F)How to install editor extensions (a.k.a., plugins, packages, etc.)
If you want to install packages, go to Packages -> Settings View -> Install Packages/Themes.
In the Search packages text box, type the name and click "Install" afterwards.

feature-toolbar-find-package

G) What are some common key bindings
Nuclide already has a bunch of built-in bindings that you can use for your projects. You can find them here: https://nuclide.io/docs/editor/keyboard-shortcuts/
For example, here are 3 of the key bindings:

Ctrl-T    Use OmniSearch to open files, etc.
Ctrl-\    Toggle the Project Explorer.
Ctrl-0    Toggle between the Editing Area and the Project Explorer’s File Tree.

H)How to change keybindings.
However, you may want to add your own keybindings for Nuclide commands as well. To do that, you should edit ~/.atom/keymap.cson. This is a CSON (CoffeeScript Object Notation) file, a close cousin of JSON, and it is easy to edit.
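For example, an entry in keymap.cson looks like this (the selector and binding here are just an illustration; swap in whatever command you actually want):

```cson
# Example only: bind Ctrl-Alt-S to Atom's core save command
# whenever focus is anywhere inside the workspace.
'atom-workspace':
  'ctrl-alt-s': 'core:save'
```

The selector scopes where the binding applies, and the value is the command the keystroke triggers.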

I)How to enable/use autocomplete for coding HTML, JS, CSS, etc
Same as for Atom: you need to install plugins to get autocompletion for particular languages.

CONCLUSION

After working with both editors, I have different thoughts about them.
I really like Atom, and I was using it while working on my release 0.2. On the other hand, Nuclide seems to be a smaller version of Atom and does not have as many features. In addition, I think it is a problem that the editor does not have complete functionality on Windows.

Winner: Atom.

PLUGINS

Now, let’s talk about plugins.
Thinking about plugins, I found a very interesting article where the author talks about different awesome plugins. I decided to try some of them.

1. Atom TernJS. This plugin provides intelligent autocomplete with type information. Such a useful thing for coders. The 'JS' in the name says that this plugin works only with JavaScript.

Here is the example how it works:

plugin1

2. DocBlockr. This plugin is useful for writing comments because it automates many things, like automatically placing a comment line on the next line, pre-filling function parameters, and so on.

3. GIT Projects. An awesome plugin for open source that allows you to interact with GitHub. You can open your git projects in Atom.

4. Linter ESLint. Another beautiful plugin that catches errors before you even run the code.

lint

5. Pigments. This plugin is very good for working with CSS because it shows the colour for colour strings.

colours

To sum up, I really like Atom. Such a great tool! I wish I had tried it before.


by osd600mytryniuk at March 08, 2017 04:50 AM


Dang Khue Tran

OSD600 Winter 2017 Release 0.2

For this release, I have worked on this bug and this is the pull request. In this blog post I am going to tell the story of how I got to the pull request.

Learning the Callback/Promise concept

Before going in to fix this issue, I knew that I needed to understand this concept in order to write any code or perform any task in this repo.

This concept is similar to the concept of a lambda in Java, or a closure in Swift. The main purpose of this concept is to perform asynchronous tasks such as network or file operations.

In a nutshell, we are passing functions into functions. They will be executed when the function finishes its task. For example, here is a piece of code from my pull request:

      _fs.writeFile(file, data, function(err) {
        if (err) {
          return console.error("Cannot write file to project: " + err);
        }
      });
I want to write data into file and pass in a function to handle errors. If there is an error, the function passed in will receive an error object, and the error will be displayed on the console.
The worst part of learning this concept is figuring out what arguments the callback will receive so I can work with them, but this can be sorted out by reading the documentation.
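To make the error-first callback pattern concrete, here is a tiny standalone sketch (divideAsync is my own toy example, not code from the pull request):

```javascript
// Node-style "error-first" callback: the async work finishes, then
// the callback is invoked with (err, result).
function divideAsync(a, b, callback) {
  setTimeout(function () {
    if (b === 0) {
      callback(new Error("division by zero")); // error path
    } else {
      callback(null, a / b);                   // success path
    }
  }, 0);
}

divideAsync(10, 2, function (err, result) {
  if (err) {
    return console.error("Cannot divide: " + err);
  }
  console.log(result); // logs: 5
});
```

The `if (err) { return ...; }` guard at the top of the callback is the same shape as the writeFile snippet above.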

Locating the default files and how Thimble creates a default project

I was advised that there is already a default project folder here. That's a good start to find where a new project is created.

I searched for the string "empty-project" within the repo and found it here, declared as a constant: DEFAULT_PROJECT_TITLE. I then searched for usages of the constant, but the results weren't helpful for my use case.

How Thimble creates a new file

I started another approach where I looked for the term "newfile". There were a lot of results, but I came across this code that does what I needed to do:

      $.get(location).then(function(data) {
        fileOptions.contents = data;
        bramble.addNewFile(fileOptions, function(err) {
          if (err) {
            console.error("[Bramble] Failed to write new file", err);
            callback(err);
            return;
          }
          callback();
        });
      }, function(err) {
        if (err) {
          console.error("[Bramble] Failed to download " + location, err);
          callback(err);
          return;
        }
        callback();
      });

This code "asks" Bramble to create a new file.

I also found that the default files are actually fetched from "/default-files/html.txt".

How to get a reference to Bramble

So I tried to put the above code snippet in here. However, it was not that simple: I needed an instance of bramble. I searched for the term bramble to find out how it was passed in, and I found this:

      Bramble.once("ready", function(bramble) {

        // For debugging, attach to window.

        window.bramble = bramble;

        BrambleUIBridge.init(bramble, csrfToken, options.appUrl);

      });

We can only have the instance of Bramble once it is fully loaded.

When I combined the two code snippets, the code produced an index.html, but Thimble still showed this message:

8ddc6f00-026d-11e7-8335-e015ea892f2a.png

Since the above code waits for Bramble to be ready before creating the index.html file, the file is not there yet when Bramble loads, so the message still shows up.

Getting the FileSystem

Since "asking" Bramble to create the file isn't a good way to fix this issue, I decided to find another way to write the file to the project within Thimble.

I did a quick Google search on how to write a file to the file system and came up with this code:

         var file = getRoot() + "/index.html";
         _fs.writeFile(file, data, function(err) {
           if (err) {
             return console.error("Cannot write file to project: " + err);
           }
         });

However, I had trouble declaring var fs = require("fs"). I am still not sure why, but my professor advised me that I can declare it like this:

var _fs = Bramble.getFileSystem();

Then, everything works like a charm.

Conclusion

This was much more challenging than the previous release, as it required me to look in more detail at how files are handled in Thimble, and everything was scattered everywhere. I needed to connect the dots to make it work for my situation.

I have learned a lot through working on this release. I learned the concept of callbacks and had a chance to apply it, and I gained more experience reading someone else's code. Searching for a particular string in the project to find how a task is done has helped me a lot to learn the code and feel more comfortable with it.


by trandangkhue27 at March 08, 2017 04:06 AM


John James

Nunjucks adventure (Release 2 for OSD)

 

At the start of this release 2 assignment, I told myself I was going to try something with JavaScript; I didn't want to do another CSS fix. Trying to do something more challenging, I came across a bug that forced developers working on the front end to reload the server to see their HTML changes. I was really curious about this, mainly because I had never heard of this type of problem. After reading the bug issue and the information that my professor wrote, I felt this was a new challenge that I could complete. At first it started off fine: reading the API docs and looking at what other developers who had this problem had to say.

 

After reading the docs and learning as much as I could about Nunjucks, I decided it was time to tackle this. I first tried to recreate the issue so that I could compare the difference, to confirm that my code was actually changing something. I noted down the information I collected from the recreation and then started working on the code. Since I had learned so much about Nunjucks, I thought I knew exactly how to fix this.

 

So I wrote my code, restarted the server and started testing. Then, to my surprise, it didn't work, and I was in shock. I started to question everything I wrote and got frustrated. I kept telling myself I must have done something wrong, because everyone said this was how to fix it. I kind of felt like a detective, reading the code character by character and comparing it to other people's fixes. But they were almost identical, and now I was afraid that maybe I couldn't fix this. So, after a week of looking for why this didn't work and with a deadline coming up, I decided to make a pull request and see what other people would say. I made my request and waited, nervously hoping for someone to say "Oh, this is an easy fix", or "OK, you're not crazy, there's another part of the project fighting you". Then one of the first suggestions came in: cleaning up the code to reduce the length of a line, which I did, telling myself "OK, so maybe I did it correctly?". Then Luke mentioned that it was taking a long time for the page to reload, and I knew the moment of truth would come soon. Then my prof, David Humphrey, came in and saved the day; he knew exactly what was happening, and in 10 minutes he fixed something I had been trying to do for a week. Then everything was fine: I updated my pull request, and now I'm just waiting for it to be approved.

Here is the code I had to work with:

let engine = new Nunjucks.Environment(paths.map(path => new Nunjucks.FileSystemLoader(path)), { autoescape: true });

And changed it to:

let nunjucksOptions = {
  noCache: true,
  watch: true,
  autoescape: true
};

let engine = new Nunjucks.Environment(paths.map(path => new Nunjucks.FileSystemLoader(path, nunjucksOptions)));
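To see why noCache matters, here is a toy loader sketch (my own illustration, not Nunjucks internals): with caching on, an edited template keeps serving the stale copy until the server restarts, which is exactly the bug described above.

```javascript
// Toy template loader: caching vs. noCache.
function makeLoader(files, options) {
  var cache = {};
  return function load(name) {
    if (!options.noCache && cache[name]) {
      return cache[name];        // stale copy survives edits
    }
    cache[name] = files[name];   // (re)read from "disk"
    return cache[name];
  };
}

var files = { "index.html": "v1" };

var cached = makeLoader(files, { noCache: false });
var fresh = makeLoader(files, { noCache: true });

cached("index.html");
fresh("index.html");
files["index.html"] = "v2";      // simulate editing the template

console.log(cached("index.html")); // logs: v1 (stale until restart)
console.log(fresh("index.html"));  // logs: v2 (picked up immediately)
```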

Overall, I'm glad for this experience; it had a lot of ups and downs, and I feel like I improved as a programmer. I learned the importance of documentation and communication with other developers. I also embarrassed myself a little on the issue by testing the wrong thing, and that made me afraid to speak up for a whole week, but I told myself I'm not the first person to make a mistake like this and I won't be the last. So I decided to get over it and try to finish this bug. I'm now looking forward to trying a bigger bug for release 3, now that I'm more confident with the open source world.

 


by johnjamesa70 at March 08, 2017 01:05 AM