Planet CDOT

December 12, 2018


Brendan Hung

Checking up on the Extension

Before ending the semester I thought it'd be a good idea to give the extension a once-over before I said goodbye to it. After combing through all of the features, I noticed that one wasn't updating properly and wondered why.

Turns out silly me made a typo during one of my earlier pull requests, which resulted in some broken code. Regardless, that was an easy fix and it should be working fine now!

The rest of the pull requests that I submitted earlier, though they haven't all been merged, don't seem to have any outstanding problems either. I do hope to contribute more to the freeCodeCamp repo whenever I can, since that is a project I really do enjoy. If I ever learn any new tips and tricks during my studies, I will most likely share them there as well!

I'm pretty amazed that a bunch of us were able to collaborate this month and create something that we could call our own. At the moment there are still a few outstanding pull requests and issues in the repo; hopefully they get resolved as well. I am anticipating the day that we can release this for other users to use!

It truly has been a great semester here at OSD600!

by bhung6494 at December 12, 2018 09:38 PM


Yeonwoo Park

I thought I added the missing test case

In addition to my contribution to the Pandas documentation updates (see the previous post), I looked for other contributions I could handle. I realized that Pandas uses a code coverage tool, called codecov, which reports how much of the code is covered by the test cases. I looked into it and found one missing test case that I thought I could handle.

(Screenshot: pandas codecov coverage report — pandas_codecov.png)

I found the test code for the register_option function in the file pandas/tests/test_config.py.

. . .
def test_register_option(self):
        self.cf.register_option('a', 1, 'doc')

        # can't register an already registered option
        pytest.raises(KeyError, self.cf.register_option, 'a', 1, 'doc')

        # can't register an already registered option
        pytest.raises(KeyError, self.cf.register_option, 'a.b.c.d1', 1,
                      'doc')
        pytest.raises(KeyError, self.cf.register_option, 'a.b.c.d2', 1,
                      'doc')

        # no python keywords
        pytest.raises(ValueError, self.cf.register_option, 'for', 0)
        pytest.raises(ValueError, self.cf.register_option, 'a.for.b', 0)
        # must be valid identifier (ensure attribute access works)
        pytest.raises(ValueError, self.cf.register_option,
                      'Oh my Goddess!', 0)

        # we can register options several levels deep
        # without predefining the intermediate steps
        # and we can define differently named options
        # in the same namespace
        self.cf.register_option('k.b.c.d1', 1, 'doc')
        self.cf.register_option('k.b.c.d2', 1, 'doc')

According to codecov, there is a missing test case for when the key is a reserved word. So I filed an issue and started working on this.

In the config.py code, there is a list representing 'reserved' keys.

_reserved_keys = ['all']  # keys which have a special meaning
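Roughly speaking, register_option consults this list up front and rejects reserved names. Here is a paraphrased sketch of that guard (not the exact pandas source; the real code raises an OptionError, which is a KeyError subclass, so a test can catch KeyError):

def register_option(key, defval, doc='', validator=None, cb=None):
    # Paraphrased sketch, not the actual pandas implementation.
    key = key.lower()
    if key in _registered_options:
        raise OptionError("Option '{}' has already been registered".format(key))
    if key in _reserved_keys:
        raise OptionError("Option '{}' is a reserved key".format(key))
    # ... keyword/identifier validation and the actual registration follow ...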

Since there was no test case checking that registering a reserved word (i.e. 'all') is rejected, I added one:

# can't register a reserved key option
pytest.raises(KeyError, self.cf.register_option, 'all', 1,
              'doc')

This test case simply checks that registering a reserved word as the key raises a KeyError. To test the new case, I followed the Pandas contribution guide; I basically just had to run the pytest command.

pytest pandas/tests/test_config.py

The new test case passed, and I posted a new pull request. It also passed the CI tools, so I had not broken anything so far.

Unfortunately, my issue and PR were closed by the maintainer since it was a 'duplicate' test. It may be because my explanation was not clear enough, or because the commit was too short (basically three lines). Now I am a bit confused about how to use codecov. I will try other missing test cases if I find any, and research how to read the test code properly.

by ywpark1 at December 12, 2018 07:39 PM


Alex Kong

ESLint for Mozilla Firefox

I'm sure Firefox, the browser from the Mozilla Foundation, needs no introduction. Even if Google Chrome is the preferred web browser for most people, it would be hard to find someone who doesn't know what Firefox is. Personally, Firefox has been my browser of choice for nearly a decade and a half, so I was rather excited to add a contribution of my own to the browser that I use almost daily.

ESLint

ESLint is a code linter that checks whether code complies with the project's standards before it is submitted, to help avoid wasting time during code reviews. Firefox's codebase is so large that some of the older pieces of source code have not been updated to the standard, so they are currently ignored by ESLint. For my bug I worked on enabling ESLint for the directories dom/abort/, dom/asmjscache/, dom/battery/, dom/broadcastchannel/ and dom/console/.

Set Up

I have development environments for all three major operating systems set up to some capacity (mostly thanks to virtual machines), and my first step was to decide which environment to develop with. After skimming through the documentation for getting the source code and the tools I would be using to submit the code, I decided to go with macOS.

Getting the Source Code

Mozilla uses Mercurial to manage their source code. To install Mercurial I ran:

brew install mercurial

which installed Mercurial on my machine. Next, I cloned the Firefox source code by running:

hg clone https://hg.mozilla.org/mozilla-central/

This can take anywhere from 30 minutes to an hour.

As I would only be implementing lint fixes, I did not build the source code.

To prep myself for the submission process I downloaded arcanist, libphutil and moz-phab from their repositories. Then I added the arcanist/bin and moz-phab directories to my system path to enable the arc and moz-phab commands.

ESLint Fixes

I removed the directories I'd be enabling ESLint for from the .eslintignore file, which holds the list of directories ESLint is supposed to ignore, and then checked the number of ESLint errors I had by running:

./mach eslint dom/

246 errors. Quite a lot! Before we panic, let's see how many we can eliminate using ESLint's automatic fix command:

./mach eslint --fix dom/

33 errors now. Much more manageable. Let’s commit these changes before moving on. My particular commit command was:

hg commit -m "Bug 1508988 - Enable ESLint for dom/abort/, dom/asmjscache/, dom/battery/, dom/broadcastchannel/ and dom/console/ (automatic changes). r?standard8!"

Note that during the submission process the bug ID and the reviewer (in my case standard8) will be parsed out of the commit message.

The Fixes

The errors I encountered were generally one of the following (a few toy sketches of the fixes follow the list):

no-undef

  • The script was a worker and the line /* eslint-env worker */ had to be added to let ESLint know
  • The variable was undefined so I had to define it with let

no-shadow

The variable is already declared in an upper scope so the variable is shadowed if it’s re-declared.

  • This was fixed by changing the var declaration to let, as let is block scoped
  • Using a new variable for function arguments

no-redeclare

The variable was redeclared, so I removed the second var.
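To give a sense of what these changes look like in practice, here are a few toy before-and-after snippets (illustrative only, not the actual dom/ code; the variable names are made up):

/* eslint-env worker */
// no-undef: declaring the environment tells ESLint that worker globals exist.

// no-undef: declare variables instead of assigning to undeclared names.
let total = 0;

// no-shadow: give the inner variable its own block-scoped name rather than
// re-declaring a name that already exists in an enclosing scope.
let offset = 10;
[1, 2, 3].forEach(item => {
  total += item + offset; // uses the outer `offset` instead of shadowing it
});

// no-redeclare: drop the repeated `var` and simply reassign.
var message = "first";
message = "second";

console.log(total, message);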

After that, I was left with 9 errors that I wasn't sure how to handle.

Unfortunately, Mozilla's source viewer DXR was unreachable while I was working on the bug, which hampered my ability to search through the source code, so at this point I was stuck. I couldn't figure out where checkForEventListenerLeaks, jsFuns and complete came from, so I couldn't safely exclude them.

As for id, I am fairly new to JavaScript and was relatively unfamiliar with the syntax. I could recognize that a destructuring assignment was occurring, but I wasn't sure what it was destructuring or how it should be handled.

I talked to my code reviewer Mark Banner, who directed me to another source code search page, Searchfox, and guided me through the process of finding jsFuns and complete, which are injected into the global scope. Since these variables are globally defined, they could be handled by adding the comment /* globals jsFuns:false, complete:false */.

The loop on line 88 was destructuring an array, not an object, so the first position in the pattern still needed to be there; the line was changed to:

for (let [, events] of _consoleStorage) {
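That is, array destructuring can leave an empty slot to skip an element you don't need. A tiny standalone example of the same pattern (using a toy Map in place of the real _consoleStorage):

// Iterating a Map yields [key, value] pairs; the empty slot in the
// destructuring pattern skips the key since only the value is needed.
const storage = new Map([[1, ["event-a"]], [2, ["event-b"]]]);
for (let [, events] of storage) {
  console.log(events);
}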

According to Mark, the checkForEventListenerLeaks errors occur because the Firefox ESLint setup hasn't been taught everything about the Firefox test environment, and the simplest fix is to add the line /* import-globals-from ../../events/test/event_leak_utils.js */ after the JavaScript script tag.

These changes fixed all the remaining ESLint errors.

I committed my code in the exact same way as before, except that the commit message said "manual changes" instead of "automatic changes".

Submitting the Changes

To submit my changes I followed the Mozilla Phabricator User Guide and successfully posted my patches without any real issues.

by Alex Kong at December 12, 2018 10:02 AM

pySearch Dev Log: More Stories from a First Time Maintainer

pySearch is a command line tool I created when I first started learning Python, for initiating web searches from the CLI. As pySearch started to pick up steam and contributions, it also became the first project I acted as a maintainer for. Before maintaining pySearch, it was hard to truly appreciate how much time and effort is required to maintain an active project.

When people first started contributing to pySearch, it would be an understatement to say I underestimated the amount of time and effort it would take to maintain the project. It's easy to say that a maintainer merges changes and mentors contributors, but it's hard to appreciate how much time and effort that involves until you're on the other end.

Tests

In my previous blog posts on pySearch I mentioned how tests are important for avoiding regressions, and how I added a test framework to pySearch using pytest. While implementing tests for pySearch did help reduce the amount of time it took to test code changes, it did not change the amount of time it took to test new features.

One recurring problem that's made testing pySearch features extremely difficult and time consuming is pySearch's cross-platform support. pySearch is designed to run on all major operating systems and behave similarly, if not identically, on each platform. However, this makes testing and troubleshooting the code rather hard, as a maintainer needs to test the code at least twice because of the significant differences between Unix and Windows.

A standout situation where this was important was when browser selection was implemented. pySearch uses Python's webbrowser module to open the web browser after pySearch builds the link. The webbrowser module automatically registers web browsers it can detect, so different browsers can be invoked as long as the module detects them. During testing we noticed that while Linux and macOS registered web browsers relatively consistently, this did not work correctly on Windows. After looking at the documentation and testing the behavior of the module, we found that the browsers' install directories had to be added to the Windows PATH environment variable before the webbrowser module would detect them. We decided to make a note of this in the README, but it would never have been discovered if we had not tested on multiple operating systems.
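For illustration, this is roughly the pattern involved when asking the webbrowser module for a specific browser and falling back to the system default (a sketch, not pySearch's actual code; the function and browser names here are made up):

import webbrowser

def open_in_browser(url, browser_name=None):
    """Open url in the requested browser, falling back to the system default."""
    try:
        browser = webbrowser.get(browser_name)  # raises webbrowser.Error if unregistered
    except webbrowser.Error:
        # On Windows a browser may not be registered unless its install
        # directory is on the PATH, so fall back to the default handler.
        browser = webbrowser.get()
    browser.open(url)

open_in_browser("https://www.google.com/search?q=pysearch", "firefox")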

Mentoring

Open source has this interesting dynamic where you get code from others and you give code in return. One of my most intriguing mentoring experiences occurred when a collaborator I was mentoring got stuck on implementing a ping to verify the validity of a search URL. Originally I had suggested that they explore invoking the ping command, as it is implemented universally on both Unix and Windows systems. Unfortunately this did not work, as ping only checks whether the domain is up; it does not verify the existence of resources, nor does it check whether a domain simply redirects to a valid domain. For example, stackoverflow.ca would redirect to stackoverflow.com, but if the browser tries to access a resource at stackoverflow.ca (e.g. http://stackoverflow.ca/search?q=test) the URL will fail.

The collaborator was stumped and asked if I had any ideas. I knew that a curl call would work for Unix-based OSes, but curl did not exist on Windows, so I searched for a PowerShell equivalent. After testing alternatives I found the Invoke-RestMethod cmdlet to be exactly what I was looking for, and I reported my findings to the collaborator along with the example call that I tested. This experience was completely new to me, as until now I had never run into a situation where I had to test a fix that I wasn't going to implement myself.
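For illustration, a similar check can also be written directly against Python's standard library, which would sidestep the curl/PowerShell split entirely; this is only a sketch of the idea, not the fix that ended up in pySearch:

import urllib.request

def url_exists(url, timeout=5):
    """Return True only if the URL resolves to a real resource,
    not just a domain that answers ping or merely redirects elsewhere."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (OSError, ValueError):  # URLError/HTTPError are OSError subclasses
        return False

print(url_exists("http://stackoverflow.ca/search?q=test"))  # fails, per the example above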

by Alex Kong at December 12, 2018 08:34 AM


Jagmeet Bhamber

DPS909 0.4 Release: Week 3

Introduction

This week wrapped up the 0.4 Release for my DPS909 course. In this release, I was to make 3 larger pull requests, with "larger" meaning contributions that were more challenging and required more effort than previous ones. Unfortunately, this time I was unable to finish all of my pull requests. Despite that, I was able to reflect on my mistakes and learned things about software development that will definitely help me in the future.

mozilla-eslint

The first issue I signed up for was the mozilla-eslint issue. I was able to fix most of the errors, cutting the original number from ~270 down to 10.

What I learned here was that I should've spent my time better and asked more questions. I was a bit scared to tackle this issue when I saw the 270 errors, and made the mistake of putting it off. When I started working on the problem, I was able to quickly cut the number of errors down to a much smaller size, which is when I had to start figuring out individual solutions for each error.

I also think I could've done better by asking my classmates questions in our Slack channel, where we were all working on this issue together (but fixing errors in different directories).

In the end, I think this was a great chance to work on a large and prestigious project like Firefox, as well as learn a lot, and I'm a bit disappointed in myself that I didn't fully take advantage of the opportunity presented to me.

GitHub-Dashboard

The second issue I tried to work on was adding ESLint to this project. I decided to work on this since it was something that we had covered in class (going over the Prettier project). I also thought it would be a good way to incorporate what I (would have) learned in the Mozilla issue and use that knowledge to fix something else. I have started working on this issue, but have run into trouble configuring ESLint in the project. I am still working on it.

Conclusion

To conclude, I think I dropped the ball a bit on this final release, with some poor time management and a bit of a lack of initiative. I think this could have gone much better if I had put in a bit more effort, as in the previous releases.

by Jagmeet Bhamber at December 12, 2018 07:53 AM


Alex Kong

Fixing the Mojave Crash for the Clementine Music Player

Clementine, based on Amarok 1.4, has been one of my favorite music players because of its cross-platform support, its wide format support and its clean, easy-to-use interface. Back when I was in high school I would constantly swap between a Mac and a PC, and Clementine made it very easy to keep a consistent experience between my computers.

Enter Mojave

While I haven't owned a Mac for a few years, I've always loved macOS and Linux environments for development, and I've used virtual machines of the two OSes (well, an Ubuntu image for Linux) to develop, just so I can test on all 3 major operating systems. macOS virtualization isn't always spot on, so after I first created my VM I ran a few basic tests to see what capabilities I had, one of which was sound. To test my sound capabilities I decided to use Clementine to play a FLAC file, but sure enough it crashed right after it launched.

Now, when Mojave was announced I remembered the hype around the security updates, so I thought that a security change was the cause of the crash, but I wasn't sure. However, I did remember that Clementine would previously ask me to give it Accessibility permissions to access the media keys, so I went into my Security and Privacy panel and allowed Accessibility access for Clementine. Sure enough this fixed the issue, but it leads to two questions: why did the Accessibility prompt not show up, and why did Clementine crash without the permission?

I found an issue on Clementine's GitHub (issue 6148) which described my exact problem. There, a maintainer of the project gave a few pointers to the code that handled the accessibility prompt.

Before continuing, I would like to preface the following by saying this was the first time I had touched code for macOS, so this was an entirely new learning experience for me, as I hadn't dealt with Apple's documentation or APIs before.

During my research I found that the call Clementine used (AXAPIEnabled) was deprecated back when OS X 10.9 (Mavericks) was released. After looking into how AXAPIEnabled() was replaced, I found that AXIsProcessTrustedWithOptions() could be used instead. By looking at the documentation for AXIsProcessTrustedWithOptions() and going to its parent header page, I found that I had to include the header <ApplicationServices/ApplicationServices.h> to use the function, resulting in the following changes.
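Roughly speaking, the replacement call is used like this (a minimal sketch of the pattern, not the exact patch that went into Clementine; the function name here is made up):

#include <ApplicationServices/ApplicationServices.h>

// Returns true if the process already has Accessibility access; if it doesn't,
// passing kAXTrustedCheckOptionPrompt = true makes macOS show the prompt that
// asks the user to grant it, instead of the call silently failing.
static Boolean accessibilityTrusted(void) {
  const void *keys[]   = { kAXTrustedCheckOptionPrompt };
  const void *values[] = { kCFBooleanTrue };
  CFDictionaryRef options = CFDictionaryCreate(
      kCFAllocatorDefault, keys, values, 1,
      &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
  Boolean trusted = AXIsProcessTrustedWithOptions(options);
  CFRelease(options);
  return trusted;
}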

To test my code I had to build the Clementine dmg file. I did this by following the instructions on the Clementine wiki.

The process involved pulling the Clementine OSX image with docker pull clementine/mac:1.3

I then opened a shell in the container using docker run -it clementine/mac:1.3 /bin/bash and followed the commands on the Clementine wiki, except that I pulled my own changes into the image. After following this process I was left with a dmg containing the Clementine application.

After running the application I could confirm that there was no crash, and when I checked the keyboard controls there was an option to open System Preferences.

Seeing this, I was confident my fix worked. Unfortunately, media keys still do not work in Mojave, but that's another bug. For now, this patch fixes the crash that occurs in Mojave if Accessibility access has not been enabled.

by Alex Kong at December 12, 2018 06:27 AM


Yuecheng Wu

Final Thoughts on Open Source

What a semester in Open Source! I have learned so much and progressed so much as a programmer. Looking back now, I feel like all those struggles along the journey have really been worth it. I could go on and on about the things I learned and how much I enjoyed the Open Source course with Professor David Humphrey. I hope I am able to address some key points in this blog post.

When I started this semester, I had very little experience with GitHub. I remember I was already panicking during Release 0.1 just writing simple tests for the filer repo; I did not know how to use git commands and had no idea how to write tests. Thankfully, the thorough weekly notes created by David were really helpful and guided me step by step to get started. After Release 0.1, I felt more comfortable with git commands and fixing issues on git repositories.

Then came Hacktoberfest, which was Release 0.2. When I saw what David wanted us to do, I thought there was no way I could do all 5 pull requests in one month, but under the pressure of failing the course I had to somehow force myself to start. My first pull request was really easy; it was only adding some comments to some code, because I wanted to get my confidence up. I wanted to get the ball rolling before I moved on to more difficult stuff, and it worked. After I finished my first pull request, I thought to myself, this can't be too hard. Then I moved on to something a little bit more difficult. In the end, not only did I finish all five pull requests, but I was also surprised by how many languages I had worked with, and I even learned some basics of a new language – Python.

After Release 0.2 I felt pretty confident in my skills because I knew they had improved. Then came Releases 0.3 and 0.4, which were similar to Hacktoberfest but required us to go deeper and try to fix larger issues. This time, I felt a lot more comfortable and confident in my abilities, so I was able to look into larger repositories to fix bigger issues. I also helped with a couple of internal Seneca projects that our class came up with together, which was really fun, and it felt really good to contribute to something that we basically started from scratch ourselves.

There are many things other than programming knowledge that I have picked up along the way. For example, I have come to understand that patience is very important when working with any programming language, especially when fixing errors or debugging. If I try to rush things it usually results in more errors because of mistakes I make along the way. What I learned is that if I have worked on something for a while without progressing as I'd like, I should take a break, do something else, then come back, because with a fresh mind I will probably be able to look at the issue from another perspective. This is only one of the many things I learned throughout the semester. This course has certainly helped me grow as a programmer and, more importantly, it has helped me gain valuable experience that I will be able to carry with me for a very long time into the future.

Finally, I would like to thank Professor David Humphrey for teaching us the course, and for always being there for us when we needed help. Your words of encouragement have given me confidence and pulled me up when I was down on myself. Thank you for being a great mentor to us, and I hope we will see each other again in the future. Merry Christmas!

by ywu194 at December 12, 2018 03:26 AM


Ebaad Ali

SpookyHash==SpookyFast ( Stage 3 )

For a couple of months now I've been analyzing Bob Jenkins' SpookyHash function and seeing if there was any way I could possibly optimize it in C. In particular… Read more "SpookyHash==SpookyFast ( Stage 3 )"

by ebaadali at December 12, 2018 02:49 AM


Michael Overall

Using Arrow keys to cycle through Mozilla Screenshots



The Mozilla Services Screenshots add-on for the Mozilla Firefox browser is a nifty add-on that lets you snip a portion of your browser's screen to create a screenshot, which you can then either download to your computer or host on Mozilla's servers. It also comes with some simple image editing tools that let you draw on top of your screenshot. Screenshots consists of both the add-on itself and the server it communicates with: the add-on captures parts of the browser window and saves the resulting screenshot locally or to the cloud, while the server provides a UI to browse, edit, and download those screenshots after they have been uploaded.

The server is written with React and Node.js with Express routing, and uses a PostgreSQL database to store screenshots and their metadata. I believe the add-on uses IndexedDB to store the current screenshot before it is either downloaded or saved to the cloud.

The part I worked on for my issue was the server. The task was to allow the left and right arrow keys to transition between screenshots stored on the server when viewing them individually.

An individual shot view with newly coded links to transition between images. Pressing the left and right arrow keys also activates the next or previous links appropriately.



It required (more or less) the following steps for me to set up the code environment:

  • Download and install the latest version of Firefox Developer Edition.
  • Install the PostgreSQL server and service, and a related DBMS, with the help of these instructions for an Ubuntu machine:
    • Create a file to hold a reference to the apt repo:
      sudo touch /etc/apt/sources.list.d/pgdg.list
    • Add this line to the new file:
      deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main
    • Get the key for the apt repo:
      wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    • Update the package lists:
      sudo apt-get update
    • Install PostgreSQL from the apt repo:
      sudo apt-get install postgresql-11 pgadmin4
    • Add yourself as a user so you can actually use the damn 'psql' command:
      sudo -u postgres createuser --superuser $USER
    • Create a database under your username:
      sudo -u postgres createdb $USER
    • Install the postgres client:
      sudo apt-get install postgresql-client
  • After pulling the repo, running npm install to pull in the required dependencies yielded the following error: npm ERR! 404 Not Found: har-validator@5.1.2
    • To solve this issue I tried:
      • running npm update
      • uninstalling and reinstalling har-validator via npm in the hopes that it could find a valid version, but trying to uninstall gave the same 404 error as trying to install the package did
      • installing and using the yarn CLI, because there is a yarn package for har-validator which may be mismatched with the npm version (see this github issue)
      • blowing away the package-lock.json file and allowing it to re-generate via the package.json file (this may have been inadvisable, but was an attempt to get anything to work)
Since none of that worked, I escalated the severity of my tinkering by manually inspecting and editing the contents of package.json. Within, I found various references to har-validator@5.1.3, which this github issue had suggested was messing things up. As a quick fix, rather than allowing npm's automation to try to grab the most recent (and broken) package, I edited those instances back to 5.1.2, and npm finally decided to make it through the package installation process.

When I tried to build and run the repo's server, however, I found I had to manually install babel-cli, browserify, node-uglify, and eslint. This may have been due to my obliteration of the package-lock.json file.


Neat. Now the server would run, but it would complain that it couldn't access the database. I wanted to get the database to run without the need for passwords, since it would never see the light of day beyond localhost, and tried to edit its pg_hba.conf file to make this happen, but the server wasn't having it. Eventually I had to manually set the password for my database so that the server would be able to access the newly created tables.

If you don't run the code repo's add-on (./bin/run-addon), trying to use the add-on in the Firefox browser at the server's address (localhost:10080) will cause the browser to redirect to Mozilla's live server to serve the pages for viewing your screenshots, and it will try to store all the screenshots you take in the cloud instead of the development database.

Since the Screenshots add-on also uses IndexedDB, you need to change Firefox's permissions by going to about:config in its URL bar and enabling various dom.indexedDB features.

Be sure to enable IndexedDB in Firefox Developer Edition.


Once all that was set up, it was time to debug. I've had success attaching debuggers to servers running React before in VS Code, but that was with the Chrome Debugger extension. Since the Screenshots add-on was for Firefox, this was out of the question. Unfortunately, the Screenshots code repo only talked about debugging the add-on itself via about:debugging (typed into the URL bar to bring up a debugging page). This featured a "learn more" link that only seemed to talk about how to run the add-on, not how to debug it in the context of a server.

This information is quite informative... however, I wanted to work on the server the extension talked to, not the extension itself.
Well, the server ran via Node.js and used Express routing, so perhaps I could attach the VS Code IDE to the server process after it was started with node. The server was actually started up using a bash script file called "run-addon", so I went into that and added the --inspect flag to its call to run the server with node, and then tried to edit VS Code's launch.json file, which defines how it launches its debugger, to attach the debugger to port 9229 of my running server. 9229 is the default port that Node exposes to debuggers if you start a server with the --inspect option.



This didn't yield any positive results, nor did trying to install a Mozilla debugger extension I found for VS Code, so it was time to use my old friend, console.log().

Since I didn't really know how the server actually worked, I used console.log() on everything and eventually found the database query that loads a screenshot into the individual shot view. I added some logic to it to pull in the URLs for all screenshots stored in the database, so I could then determine which screenshots came before and after the one currently displayed in the user's browser.
The React app used a vaguely MVC-style layout: a file called view.js held a React component, whose properties were delivered to it via model.js (which I think held structures for the client and the server to map data to and from the view); that in turn was associated with a controller file and eventually led back to server-shot.js, which pulled screenshot objects out of the Postgres database. I added some properties to the React component and its related model to hold the next and previous URLs, and added some on-screen links to allow users to traverse these shots.

This didn't really do anything to get the arrow keys working, so I then had to add an event listener to check whether the user hit the left or right arrow key. This proved to be a bit more difficult than anticipated, because React views are written with JSX, and they don't really like you inserting <script> tags to add your listeners inline with your page code. Trying to add a listener that redirects to the next or previous image in the React component's componentDidMount() callback didn't really help either, because the properties needed to determine the URL to redirect to were not yet populated at that point.

    This doesn't work since this.props.prevShotId is undefined when the component mounts:


    window.addEventListener("keypress", (event)=>{
              const keyName = event.key;
              if(keyName == "ArrowLeft"){
                window.location.replace(this.props.prevShotId);
              }
              if (keyName == "ArrowRight"){
                window.location.replace(this.props.prevShotId);
              }
    });

However, giving the previous and next image links I had already made unique id attributes, and replacing window.location.replace() with document.getElementById().click(), did work.
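Putting that together, the working listener looked roughly like this (a sketch: the ids "prev-shot-link" and "next-shot-link" are made up for illustration, and the actual ids in my patch may differ):

window.addEventListener("keypress", (event) => {
  // The prev/next anchors already carry the correct hrefs, so clicking them
  // avoids depending on props that aren't populated when the component mounts.
  if (event.key === "ArrowLeft") {
    const prev = document.getElementById("prev-shot-link"); // hypothetical id
    if (prev) prev.click();
  }
  if (event.key === "ArrowRight") {
    const next = document.getElementById("next-shot-link"); // hypothetical id
    if (next) next.click();
  }
});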


So after all that messing about, I managed to get the Mozilla Screenshots server to transition between all of the shots in the database by using the arrow keys on your keyboard.

by Michael Overall at December 12, 2018 01:24 AM


Danny Chen

Project: Part 3 – Optimizing and porting argon2 package using C and Assembler language (Progress 4)

Requirements / System Specifications

Argon2 password hashing function package:

https://github.com/P-H-C/phc-winner-argon2

Machine 1:

Aarch64 Fedora 28 version of Linux operating system

Cortex-A57 8 core processor

Two sticks of Dual-Channel DIMM DDR3 8GB RAM (16GB in total)

Machine 2:

Intel(R) Xeon(R) CPU E5-1630 v4 @ 3.70GHz

Four sticks of 8GB DIMM DDR4 RAM at 2.4 GHz (32 GB of RAM in total)

x86_64 Fedora 28 version of Linux Operating System

This is a continuation of the Project: Part 3 – Optimizing and porting argon2 package using C and Assembler language (Progress 3) blog.

I have tested the modified code, seen here:

    /*
    * Argon2 reference source code package - reference C implementations
    *
    * Copyright 2015
    * Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
    *
    * You may use this work under the terms of a Creative Commons CC0 1.0
    * License/Waiver or the Apache Public License 2.0, at your option. The terms of
    * these licenses can be found at:
    *
    * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
    * - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
    *
    * You should have received a copy of both of these licenses along with this
    * software. If not, they may be obtained at the above URLs.
    */
    
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #define BILLION 1000000000L;
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif
    
    #include "argon2.h"
    
    /*
    static uint64_t rdtsc(void) {
    #ifdef _MSC_VER
    return __rdtsc();
    #else
    #if defined(__amd64__) || defined(__x86_64__)
    uint64_t rax, rdx;
    __asm__ __volatile__("rdtsc" : "=a"(rax), "=d"(rdx) : :);
    return (rdx << 32) | rax;
    #elif defined(__i386__) || defined(__i386) || defined(__X86__)
    uint64_t rax;
    __asm__ __volatile__("rdtsc" : "=A"(rax) : :);
    return rax;
    #elif defined(__aarch64__)
    return 1;
    #else
    return 0;
    #endif
    #endif
    }
    
    */
    
    
    /*
    * Benchmarks Argon2 with salt length 16, password length 16, t_cost 3,
    and different m_cost and threads
    */
    static void benchmark() {
    #define BENCH_OUTLEN 16
    #define BENCH_INLEN 16
    const uint32_t inlen = BENCH_INLEN;
    const unsigned outlen = BENCH_OUTLEN;
    unsigned char out[BENCH_OUTLEN];
    unsigned char pwd_array[BENCH_INLEN];
    unsigned char salt_array[BENCH_INLEN];
    #undef BENCH_INLEN
    #undef BENCH_OUTLEN
    
    struct timespec start, stop;
    double accum;
    
    uint32_t t_cost = 3;
    uint32_t m_cost;
    uint32_t thread_test[4] = {1, 2, 4, 8};
    argon2_type types[3] = {Argon2_i, Argon2_d, Argon2_id};
    
    memset(pwd_array, 0, inlen);
    memset(salt_array, 1, inlen);
    
    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2) {
    unsigned i;
    for (i = 0; i < 4; ++i) {
    double run_time = 0;
    uint32_t thread_n = thread_test[i];
    unsigned j;
    for (j = 0; j < 3; ++j) {
    /*clock_t start_time, stop_time;
    uint64_t start_cycles, stop_cycles;
    uint64_t delta;
    double mcycles;*/
    
    argon2_type type = types[j];
    
    /*start_time = clock();
    start_cycles = rdtsc();*/
    
    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &start);
    }
    
    argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen,
    salt_array, inlen, out, outlen, NULL, 0, type,
    ARGON2_VERSION_NUMBER);
    
    /*stop_cycles = rdtsc();
    stop_time = clock();*/
    
    /*delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);*/
    
    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &stop);
    }
    
    accum = ( (double)stop.tv_sec - (double)start.tv_sec )
    + ( (double)stop.tv_nsec - (double)start.tv_nsec ) / BILLION;
    
    double mcycles = accum * BILLION;
    mcycles = mcycles / (1UL << 20);
    uint64_t delta = accum * BILLION;
    delta = delta / (m_cost);
    
    printf("%s %d iterations %d MiB %d threads: %2.2f cpb %2.2f "
    "Mcycles \n", argon2_type2string(type, 1), t_cost,
    m_cost >> 10, thread_n, (float)delta / 1024, mcycles);
    
    run_time += run_time / (CLOCKS_PER_SEC);
    
    /*run_time += accum;
    printf("%2.4f seconds\n\n", (double)run_time);*/
    }
    
    /*run_time = 0;*/
    run_time += accum;
    printf("%2.4f seconds\n\n", run_time);
    }
    }
    
    }
    
    int main() {
    benchmark();
    return ARGON2_OK;
    }

This was the bench.c file from the argon2 password hashing function.
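Stripped of the benchmark loops, the timing change in the code above boils down to one pattern: the x86-only rdtsc counter is replaced with clock_gettime(CLOCK_REALTIME), which also works on AArch64. A minimal standalone sketch of that pattern (my own distilled example, not part of the argon2 sources):

#include <stdio.h>
#include <time.h>

#define BILLION 1000000000L

int main(void) {
    struct timespec start, stop;

    if (clock_gettime(CLOCK_REALTIME, &start) == -1) {
        perror("clock gettime");
        return 1;
    }

    /* ... the work being timed, e.g. a call to argon2_hash(...) ... */

    if (clock_gettime(CLOCK_REALTIME, &stop) == -1) {
        perror("clock gettime");
        return 1;
    }

    /* Elapsed wall-clock time in seconds. */
    double accum = ((double)stop.tv_sec - (double)start.tv_sec)
                 + ((double)stop.tv_nsec - (double)start.tv_nsec) / BILLION;
    printf("%2.4f seconds\n", accum);
    return 0;
}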

The following were the results from machine 2 running the modified program:

    Argon2i 3 iterations 1 MiB 1 threads: 3.54 cpb 3.54 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 3.20 cpb 3.20 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 2.73 cpb 2.73 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 2.92 cpb 2.92 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 2.34 cpb 2.34 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 2.40 cpb 2.40 Mcycles
    0.0025 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 1.97 cpb 1.97 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 1.87 cpb 1.87 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 1.94 cpb 1.94 Mcycles
    0.0020 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 3.21 cpb 3.21 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 3.00 cpb 3.00 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 2.81 cpb 2.81 Mcycles
    0.0030 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 1.40 cpb 2.79 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 1.21 cpb 2.42 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 1.04 cpb 2.08 Mcycles
    0.0022 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 1.44 cpb 2.88 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 1.36 cpb 2.72 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 1.37 cpb 2.73 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 0.99 cpb 1.99 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 1.11 cpb 2.21 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.05 cpb 2.11 Mcycles
    0.0022 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 1.67 cpb 3.35 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 1.54 cpb 3.08 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 1.51 cpb 3.02 Mcycles
    0.0032 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 1.41 cpb 5.65 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 1.09 cpb 4.38 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 0.98 cpb 3.92 Mcycles
    0.0041 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 1.28 cpb 5.13 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 1.21 cpb 4.85 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 1.23 cpb 4.93 Mcycles
    0.0052 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 0.79 cpb 3.18 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 0.79 cpb 3.18 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 0.81 cpb 3.22 Mcycles
    0.0034 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 1.00 cpb 4.00 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 0.89 cpb 3.58 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 0.91 cpb 3.64 Mcycles
    0.0038 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 1.47 cpb 11.79 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 1.13 cpb 9.08 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 0.97 cpb 7.80 Mcycles
    0.0082 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 1.27 cpb 10.18 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 0.87 cpb 6.95 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 0.88 cpb 7.00 Mcycles
    0.0073 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 0.91 cpb 7.31 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 0.80 cpb 6.42 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 0.59 cpb 4.70 Mcycles
    0.0049 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 0.82 cpb 6.53 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 0.83 cpb 6.63 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 0.81 cpb 6.47 Mcycles
    0.0068 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 1.89 cpb 30.20 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 1.33 cpb 21.22 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 1.17 cpb 18.70 Mcycles
    0.0196 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 1.17 cpb 18.80 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 0.81 cpb 13.03 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 0.79 cpb 12.57 Mcycles
    0.0132 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 0.80 cpb 12.79 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 0.56 cpb 8.97 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 0.53 cpb 8.45 Mcycles
    0.0089 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 0.60 cpb 9.57 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 0.64 cpb 10.22 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 0.68 cpb 10.83 Mcycles
    0.0114 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 1.64 cpb 52.53 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 1.50 cpb 47.89 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 1.49 cpb 47.84 Mcycles
    0.0502 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 1.28 cpb 41.08 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 1.29 cpb 41.17 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 1.38 cpb 44.31 Mcycles
    0.0465 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 0.86 cpb 27.46 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 0.74 cpb 23.58 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 0.65 cpb 20.68 Mcycles
    0.0217 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 0.68 cpb 21.81 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 0.69 cpb 22.09 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 0.68 cpb 21.73 Mcycles
    0.0228 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 1.61 cpb 103.11 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 1.58 cpb 101.05 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 1.58 cpb 101.25 Mcycles
    0.1062 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 1.44 cpb 92.42 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 1.18 cpb 75.76 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 1.18 cpb 75.28 Mcycles
    0.0789 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 0.76 cpb 48.48 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 0.65 cpb 41.49 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 0.63 cpb 40.49 Mcycles
    0.0425 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 0.58 cpb 37.08 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 0.61 cpb 38.88 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 0.61 cpb 39.02 Mcycles
    0.0409 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 1.72 cpb 220.68 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 1.65 cpb 211.20 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 1.61 cpb 206.66 Mcycles
    0.2167 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 1.12 cpb 143.16 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 1.11 cpb 142.53 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 1.11 cpb 142.67 Mcycles
    0.1496 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 0.68 cpb 87.52 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 0.68 cpb 86.96 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 0.68 cpb 86.78 Mcycles
    0.0910 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 0.59 cpb 75.56 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 0.55 cpb 70.96 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 0.58 cpb 74.02 Mcycles
    0.0776 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 1.75 cpb 447.73 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 1.62 cpb 414.48 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 1.62 cpb 415.25 Mcycles
    0.4354 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 1.17 cpb 299.72 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 1.07 cpb 274.17 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 1.14 cpb 291.48 Mcycles
    0.3056 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 0.70 cpb 180.25 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 0.71 cpb 182.79 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 0.70 cpb 180.23 Mcycles
    0.1890 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 0.54 cpb 137.75 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 0.54 cpb 139.23 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 0.53 cpb 134.82 Mcycles
    0.1414 seconds

This is strange, as the original had results like this:

    2292451852727619283Argon2i 3 iterations 1 MiB 1 threads: 10574.63 cpb 10574.64 Mcycles
    9176590593415145417Argon2d 3 iterations 1 MiB 1 threads: 10573.79 cpb 10573.79 Mcycles
    16050798784100622823Argon2id 3 iterations 1 MiB 1 threads: 10571.93 cpb 10571.94 Mcycles
    0.0100 seconds
    
    2290633554493452044Argon2i 3 iterations 1 MiB 2 threads: 10574.07 cpb 10574.07 Mcycles
    29783368801178634129Argon2d 3 iterations 1 MiB 2 threads: 10571.67 cpb 10571.67 Mcycles
    36635109851864293143Argon2id 3 iterations 1 MiB 2 threads: 10572.13 cpb 10572.13 Mcycles
    0.0160 seconds
Note: the beginning of each line has a random set of numbers, and the cpb and Mcycles values were really large, suggesting the CPU was much slower to hash the result.

I will now change the optimization level to -O3 and retest the program.

Result:
    Argon2i 3 iterations 1 MiB 1 threads: 3.42 cpb 3.42 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 3.18 cpb 3.18 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 2.72 cpb 2.72 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 2.49 cpb 2.49 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 2.33 cpb 2.33 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 2.30 cpb 2.31 Mcycles
    0.0024 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 2.23 cpb 2.23 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.06 cpb 2.06 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 1.71 cpb 1.71 Mcycles
    0.0018 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 3.17 cpb 3.17 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 3.00 cpb 3.00 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 2.99 cpb 2.99 Mcycles
    0.0031 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 1.41 cpb 2.82 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 1.23 cpb 2.47 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 1.04 cpb 2.07 Mcycles
    0.0022 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 1.39 cpb 2.79 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 1.36 cpb 2.73 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 1.34 cpb 2.69 Mcycles
    0.0028 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 1.02 cpb 2.04 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 0.99 cpb 1.99 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.00 cpb 1.99 Mcycles
    0.0021 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 1.71 cpb 3.43 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 1.68 cpb 3.37 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 1.64 cpb 3.29 Mcycles
    0.0034 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 1.37 cpb 5.49 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 1.10 cpb 4.40 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 1.01 cpb 4.06 Mcycles
    0.0043 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 1.35 cpb 5.40 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 1.18 cpb 4.71 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 1.19 cpb 4.78 Mcycles
    0.0050 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 0.91 cpb 3.65 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 0.91 cpb 3.63 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 0.90 cpb 3.62 Mcycles
    0.0038 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 1.02 cpb 4.08 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 1.01 cpb 4.03 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 0.95 cpb 3.80 Mcycles
    0.0040 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 1.40 cpb 11.22 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 1.16 cpb 9.25 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 0.99 cpb 7.93 Mcycles
    0.0083 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 1.42 cpb 11.40 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 0.88 cpb 7.03 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 0.75 cpb 6.02 Mcycles
    0.0063 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 0.94 cpb 7.49 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 0.74 cpb 5.96 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 0.55 cpb 4.44 Mcycles
    0.0047 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 0.71 cpb 5.67 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 0.76 cpb 6.11 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 0.75 cpb 5.97 Mcycles
    0.0063 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 1.62 cpb 25.97 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 1.27 cpb 20.26 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 1.14 cpb 18.20 Mcycles
    0.0191 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 1.35 cpb 21.65 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 0.98 cpb 15.62 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 0.92 cpb 14.74 Mcycles
    0.0155 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 0.84 cpb 13.44 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 0.54 cpb 8.65 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 0.58 cpb 9.27 Mcycles
    0.0097 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 0.61 cpb 9.80 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 0.61 cpb 9.72 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 0.67 cpb 10.75 Mcycles
    0.0113 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 1.58 cpb 50.49 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 1.47 cpb 46.95 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 1.47 cpb 47.09 Mcycles
    0.0494 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 1.46 cpb 46.79 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 1.39 cpb 44.55 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 1.42 cpb 45.41 Mcycles
    0.0476 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 0.85 cpb 27.25 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 0.63 cpb 20.09 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 0.67 cpb 21.30 Mcycles
    0.0223 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 0.65 cpb 20.74 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 0.67 cpb 21.54 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 0.67 cpb 21.34 Mcycles
    0.0224 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 1.60 cpb 102.66 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 1.55 cpb 99.24 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 1.55 cpb 99.25 Mcycles
    0.1041 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 1.22 cpb 78.43 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 1.26 cpb 80.65 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 1.20 cpb 76.73 Mcycles
    0.0805 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 0.76 cpb 48.88 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 0.68 cpb 43.39 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 0.74 cpb 47.31 Mcycles
    0.0496 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 0.65 cpb 41.82 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 0.63 cpb 40.18 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 0.67 cpb 42.62 Mcycles
    0.0447 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 1.66 cpb 212.21 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 1.72 cpb 219.73 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 1.64 cpb 209.82 Mcycles
    0.2200 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 1.24 cpb 158.31 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 1.11 cpb 142.63 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 1.19 cpb 152.53 Mcycles
    0.1599 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 0.75 cpb 95.45 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 0.68 cpb 86.76 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 0.68 cpb 87.00 Mcycles
    0.0912 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 0.57 cpb 72.78 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 0.58 cpb 74.95 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 0.59 cpb 75.34 Mcycles
    0.0790 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 1.76 cpb 451.19 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 1.69 cpb 433.36 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 1.60 cpb 408.90 Mcycles
    0.4288 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 1.16 cpb 296.43 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 1.09 cpb 279.88 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 1.18 cpb 301.38 Mcycles
    0.3160 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 0.74 cpb 189.06 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 0.68 cpb 174.25 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 0.71 cpb 180.84 Mcycles
    0.1896 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 0.50 cpb 128.98 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 0.55 cpb 141.48 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 0.52 cpb 132.25 Mcycles
    0.1387 seconds
    
    Argon2i 3 iterations 512 MiB 1 threads: 1.75 cpb 895.61 Mcycles
    Argon2d 3 iterations 512 MiB 1 threads: 1.65 cpb 844.13 Mcycles
    Argon2id 3 iterations 512 MiB 1 threads: 1.65 cpb 843.89 Mcycles
    0.8849 seconds
    
    Argon2i 3 iterations 512 MiB 2 threads: 1.10 cpb 563.01 Mcycles
    Argon2d 3 iterations 512 MiB 2 threads: 1.12 cpb 573.63 Mcycles
    Argon2id 3 iterations 512 MiB 2 threads: 1.12 cpb 575.07 Mcycles
    0.6030 seconds
    
    Argon2i 3 iterations 512 MiB 4 threads: 0.67 cpb 341.87 Mcycles
    Argon2d 3 iterations 512 MiB 4 threads: 0.69 cpb 351.20 Mcycles
    Argon2id 3 iterations 512 MiB 4 threads: 0.66 cpb 337.59 Mcycles
    0.3540 seconds
    
    Argon2i 3 iterations 512 MiB 8 threads: 0.50 cpb 255.14 Mcycles
    Argon2d 3 iterations 512 MiB 8 threads: 0.49 cpb 253.08 Mcycles
    Argon2id 3 iterations 512 MiB 8 threads: 0.50 cpb 258.21 Mcycles
    0.2708 seconds

The program now runs fairly fast. This is expected, as the optimization level is -O3.

Test 3 (Extra):

I will be testing on a third machine.

Specifications:

8 core aarch64 X-Gene CPU
Two sticks of DDR3 4096 MB RAM @ 1600 MHz
Fedora 28 64-bit Linux Operating System

Result:

This is with optimization level -O2.

    Building without optimizations
    cc -std=c89 -O2 -Wall -g -Iinclude -Isrc -pthread src/argon2.c src/core.c src/blake2/blake2b.c src/thread.c src/encoding.c src/ref.c src/bench.c -o bench
    Argon2i 3 iterations 1 MiB 1 threads: 5.51 cpb 5.51 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 5.18 cpb 5.18 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 4.78 cpb 4.78 Mcycles
    0.0050 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 4.00 cpb 4.00 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 3.67 cpb 3.67 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 3.76 cpb 3.76 Mcycles
    0.0039 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 3.16 cpb 3.16 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.95 cpb 2.95 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 3.07 cpb 3.07 Mcycles
    0.0032 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 5.75 cpb 5.75 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 5.90 cpb 5.90 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 6.04 cpb 6.04 Mcycles
    0.0063 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 5.48 cpb 10.96 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 5.27 cpb 10.53 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 4.80 cpb 9.59 Mcycles
    0.0101 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 3.18 cpb 6.35 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 3.14 cpb 6.27 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 3.05 cpb 6.10 Mcycles
    0.0064 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 2.38 cpb 4.76 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 2.33 cpb 4.67 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 2.36 cpb 4.72 Mcycles
    0.0050 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 3.62 cpb 7.23 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 3.58 cpb 7.15 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 3.67 cpb 7.34 Mcycles
    0.0077 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 5.58 cpb 22.32 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 5.09 cpb 20.35 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 4.84 cpb 19.36 Mcycles
    0.0203 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.87 cpb 11.49 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.86 cpb 11.45 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.84 cpb 11.38 Mcycles
    0.0119 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.89 cpb 7.54 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.82 cpb 7.30 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 1.80 cpb 7.21 Mcycles
    0.0076 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.47 cpb 9.90 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 2.55 cpb 10.19 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 2.63 cpb 10.51 Mcycles
    0.0110 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 5.82 cpb 46.54 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 5.33 cpb 42.66 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 5.04 cpb 40.33 Mcycles
    0.0423 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.84 cpb 22.69 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.78 cpb 22.22 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.83 cpb 22.65 Mcycles
    0.0237 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.65 cpb 13.20 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.63 cpb 13.07 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.64 cpb 13.11 Mcycles
    0.0137 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 2.09 cpb 16.73 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 1.95 cpb 15.62 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 2.36 cpb 18.85 Mcycles
    0.0198 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 6.14 cpb 98.25 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 5.70 cpb 91.25 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 5.47 cpb 87.54 Mcycles
    0.0918 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 2.98 cpb 47.67 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 2.93 cpb 46.88 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 2.94 cpb 47.08 Mcycles
    0.0494 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.62 cpb 25.96 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.61 cpb 25.72 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.62 cpb 25.90 Mcycles
    0.0272 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.79 cpb 28.67 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.75 cpb 28.07 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.82 cpb 29.16 Mcycles
    0.0306 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 6.34 cpb 203.00 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 6.26 cpb 200.26 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 6.27 cpb 200.72 Mcycles
    0.2105 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.42 cpb 109.52 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.38 cpb 108.09 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.38 cpb 108.12 Mcycles
    0.1134 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.93 cpb 61.63 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.90 cpb 60.92 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.94 cpb 62.00 Mcycles
    0.0650 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.94 cpb 62.07 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 1.96 cpb 62.58 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 1.92 cpb 61.30 Mcycles
    0.0643 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 6.48 cpb 414.84 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 6.40 cpb 409.88 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 6.41 cpb 410.55 Mcycles
    0.4305 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.47 cpb 221.90 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.43 cpb 219.27 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.43 cpb 219.69 Mcycles
    0.2304 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.92 cpb 123.08 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 1.90 cpb 121.74 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.93 cpb 123.49 Mcycles
    0.1295 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.82 cpb 116.51 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.79 cpb 114.79 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.80 cpb 115.02 Mcycles
    0.1206 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 6.60 cpb 844.52 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 6.52 cpb 835.11 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 6.54 cpb 836.68 Mcycles
    0.8773 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 3.52 cpb 450.00 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 3.47 cpb 444.85 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 3.49 cpb 446.23 Mcycles
    0.4679 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 1.94 cpb 247.84 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 1.91 cpb 245.05 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 1.92 cpb 245.15 Mcycles
    0.2571 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 1.73 cpb 221.21 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 1.70 cpb 217.79 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 1.64 cpb 209.97 Mcycles
    0.2202 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 6.69 cpb 1712.64 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 6.62 cpb 1694.77 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 6.63 cpb 1696.72 Mcycles
    1.7791 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 3.55 cpb 909.09 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 3.51 cpb 899.22 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 3.52 cpb 900.67 Mcycles
    0.9444 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 1.95 cpb 499.72 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 1.94 cpb 497.66 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 1.94 cpb 496.66 Mcycles
    0.5208 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 1.48 cpb 379.07 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 1.55 cpb 398.15 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 1.58 cpb 403.45 Mcycles
    0.4230 seconds
    
    Argon2i 3 iterations 512 MiB 1 threads: 6.75 cpb 3458.96 Mcycles
    Argon2d 3 iterations 512 MiB 1 threads: 6.68 cpb 3419.92 Mcycles
    Argon2id 3 iterations 512 MiB 1 threads: 6.69 cpb 3426.03 Mcycles
    3.5925 seconds
    
    Argon2i 3 iterations 512 MiB 2 threads: 3.58 cpb 1835.84 Mcycles
    Argon2d 3 iterations 512 MiB 2 threads: 3.55 cpb 1816.11 Mcycles
    Argon2id 3 iterations 512 MiB 2 threads: 3.55 cpb 1819.26 Mcycles
    1.9076 seconds
    
    Argon2i 3 iterations 512 MiB 4 threads: 1.97 cpb 1009.56 Mcycles
    Argon2d 3 iterations 512 MiB 4 threads: 1.95 cpb 997.45 Mcycles
    Argon2id 3 iterations 512 MiB 4 threads: 2.01 cpb 1028.11 Mcycles
    1.0780 seconds
    
    Argon2i 3 iterations 512 MiB 8 threads: 1.41 cpb 721.65 Mcycles
    Argon2d 3 iterations 512 MiB 8 threads: 1.64 cpb 839.50 Mcycles
    Argon2id 3 iterations 512 MiB 8 threads: 1.69 cpb 865.63 Mcycles
    0.9077 seconds

    This machine is noticeably slower than the other two. It also has less memory than them, so the slower results are to be expected.

    Moving on to the next optimization level -O3.

    Result:
    Building without optimizations
    cc -std=c89 -O3 -Wall -g -Iinclude -Isrc -pthread src/argon2.c src/core.c src/blake2/blake2b.c src/thread.c src/encoding.c src/ref.c src/bench.c -o bench
    Argon2i 3 iterations 1 MiB 1 threads: 5.75 cpb 5.75 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 5.45 cpb 5.45 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 5.04 cpb 5.04 Mcycles
    0.0053 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 3.97 cpb 3.97 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 3.59 cpb 3.59 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 3.54 cpb 3.54 Mcycles
    0.0037 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 3.00 cpb 3.00 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.84 cpb 2.84 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 2.77 cpb 2.77 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 5.19 cpb 5.20 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 5.07 cpb 5.07 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 4.92 cpb 4.93 Mcycles
    0.0052 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 5.70 cpb 11.40 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 5.49 cpb 10.98 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 5.07 cpb 10.14 Mcycles
    0.0106 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 3.19 cpb 6.39 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 3.15 cpb 6.30 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 3.21 cpb 6.43 Mcycles
    0.0067 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 2.20 cpb 4.41 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 2.22 cpb 4.44 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 2.16 cpb 4.32 Mcycles
    0.0045 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 3.68 cpb 7.36 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 2.80 cpb 5.61 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 2.79 cpb 5.58 Mcycles
    0.0058 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 5.81 cpb 23.23 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 5.34 cpb 21.38 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 5.11 cpb 20.43 Mcycles
    0.0214 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.98 cpb 11.93 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.93 cpb 11.73 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.93 cpb 11.71 Mcycles
    0.0123 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.82 cpb 7.28 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.77 cpb 7.08 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 1.77 cpb 7.07 Mcycles
    0.0074 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.50 cpb 9.99 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 2.70 cpb 10.82 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 2.89 cpb 11.54 Mcycles
    0.0121 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 6.05 cpb 48.43 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 5.58 cpb 44.62 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 5.31 cpb 42.46 Mcycles
    0.0445 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.95 cpb 23.60 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.91 cpb 23.26 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.90 cpb 23.23 Mcycles
    0.0244 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.66 cpb 13.24 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.64 cpb 13.13 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.64 cpb 13.10 Mcycles
    0.0137 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 2.03 cpb 16.25 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 2.29 cpb 18.37 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 1.92 cpb 15.33 Mcycles
    0.0161 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 6.37 cpb 102.00 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 5.97 cpb 95.50 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 5.74 cpb 91.90 Mcycles
    0.0964 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 3.12 cpb 49.90 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 3.07 cpb 49.17 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 3.08 cpb 49.33 Mcycles
    0.0517 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.70 cpb 27.26 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.68 cpb 26.94 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.69 cpb 27.04 Mcycles
    0.0283 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.81 cpb 28.91 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.87 cpb 29.85 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.87 cpb 29.86 Mcycles
    0.0313 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 6.57 cpb 210.38 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 6.51 cpb 208.24 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 6.52 cpb 208.70 Mcycles
    0.2188 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.53 cpb 112.92 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.49 cpb 111.63 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.50 cpb 111.91 Mcycles
    0.1173 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.97 cpb 63.21 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.96 cpb 62.57 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.96 cpb 62.68 Mcycles
    0.0657 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.89 cpb 60.42 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 2.00 cpb 63.85 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 2.03 cpb 64.85 Mcycles
    0.0680 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 6.72 cpb 430.30 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 6.66 cpb 426.03 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 6.67 cpb 426.61 Mcycles
    0.4473 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.58 cpb 229.32 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.54 cpb 226.89 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.55 cpb 227.27 Mcycles
    0.2383 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.98 cpb 126.75 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 1.96 cpb 125.35 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.96 cpb 125.71 Mcycles
    0.1318 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.87 cpb 119.64 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.94 cpb 123.96 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.90 cpb 121.41 Mcycles
    0.1273 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 6.83 cpb 874.04 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 6.77 cpb 866.06 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 6.78 cpb 867.69 Mcycles
    0.9098 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 3.62 cpb 464.03 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 3.60 cpb 460.44 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 3.59 cpb 460.12 Mcycles
    0.4825 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 2.00 cpb 255.49 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 1.97 cpb 251.78 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 1.97 cpb 252.45 Mcycles
    0.2647 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 1.85 cpb 236.45 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 1.71 cpb 218.54 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 1.71 cpb 219.59 Mcycles
    0.2303 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 6.92 cpb 1771.62 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 6.86 cpb 1756.04 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 6.87 cpb 1759.49 Mcycles
    1.8450 seconds

    It looks like the results show a slight improvement in run time.

    Conclusion:

    I do not know exactly what eliminated those random numbers from the x86_64 basic test, but I would consider this a success in porting the argon2 password hashing benchmark tool to work on any Linux device, whether AArch64 or x86_64.

    by dcchen at December 12, 2018 01:11 AM


    Andriy Yevseytsev

    DPS909 FINAL BLOG POST

    During my 5th term of the BSD program at Seneca College I had a chance to take a professional elective course. After talking to my friend who works at Google about the list of electives Seneca offers, he advised me to take an open source course, because open source knowledge is critically important in the industry.

    During DPS909 I gained a lot of knowledge and experience with GitHub and with how git works. I am 100% sure that communicating with the open source community will help me in my future workplaces, because all of us need those soft skills.

    Moreover, in the second part of the course, I became a repository maintainer for one of the internal projects, the Seneca Blackboard Extension, and this experience gave me the chance to see a project not only as a contributor but as a maintainer as well.

    Speaking about what I personally liked about DPS909, it is important to mention the freedom we had: we could choose any project we wanted to work on, with almost no limits. Such an approach lets students gain personalized experience on GitHub and in the community.

    Thank you, David, for teaching! See you in DPS911 next term!

    by Andriy Yevseytsev (noreply@blogger.com) at December 12, 2018 12:49 AM


    Danny Chen

    Project: Part3 – Optimizing and porting argon2 package using C and Assembler language(Progress 3)

    Requirements/ System Specifications.

    Argon2 Password hashing function package:

    https://github.com/P-H-C/phc-winner-argon2

    Machine 1:

    Aarch64 Fedora 28 version of Linux operating system

    Cortex-A57 8 core processor

    Two sticks of Dual-Channel DIMM DDR3 8GB RAM (16GB in total)

    Machine 2:

    Intel(R) Xeon(R) CPU E5-1630 v4 @ 3.70GHz

    Four sticks of 8GB DIMM DDR4 RAM at 2.4 GHz (32 GB of RAM in total)

    x86_64 Fedora 28 version of Linux Operating System

    Approach:

    I will test the changed code on machine 1. This is a continuation of the last blog titled: “Project: Part3 – Optimizing and porting argon2 package using C and Assembler language(Progress 2)”.

    Here is the modified version of bench.c from the argon2 password hashing function:

    /*
    * Argon2 reference source code package - reference C implementations
    *
    * Copyright 2015
    * Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
    *
    * You may use this work under the terms of a Creative Commons CC0 1.0
    * License/Waiver or the Apache Public License 2.0, at your option. The terms of
    * these licenses can be found at:
    *
    * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
    * - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
    *
    * You should have received a copy of both of these licenses along with this
    * software. If not, they may be obtained at the above URLs.
    */
    
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #define BILLION 1000000000L;
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif
    
    #include "argon2.h"
    
    /*
    static uint64_t rdtsc(void) {
    #ifdef _MSC_VER
    return __rdtsc();
    #else
    #if defined(__amd64__) || defined(__x86_64__)
    uint64_t rax, rdx;
    __asm__ __volatile__("rdtsc" : "=a"(rax), "=d"(rdx) : :);
    return (rdx << 32) | rax;
    #elif defined(__i386__) || defined(__i386) || defined(__X86__)
    uint64_t rax;
    __asm__ __volatile__("rdtsc" : "=A"(rax) : :);
    return rax;
    #else
    #error "Not implemented!"
    #endif
    #endif
    }
    
    */
    
    
    /*
    * Benchmarks Argon2 with salt length 16, password length 16, t_cost 3,
    and different m_cost and threads
    */
    static void benchmark() {
    #define BENCH_OUTLEN 16
    #define BENCH_INLEN 16
    const uint32_t inlen = BENCH_INLEN;
    const unsigned outlen = BENCH_OUTLEN;
    unsigned char out[BENCH_OUTLEN];
    unsigned char pwd_array[BENCH_INLEN];
    unsigned char salt_array[BENCH_INLEN];
    #undef BENCH_INLEN
    #undef BENCH_OUTLEN
    
    struct timespec start, stop;
    double accum;
    
    uint32_t t_cost = 3;
    uint32_t m_cost;
    uint32_t thread_test[4] = {1, 2, 4, 8};
    argon2_type types[3] = {Argon2_i, Argon2_d, Argon2_id};
    
    memset(pwd_array, 0, inlen);
    memset(salt_array, 1, inlen);
    
    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2) {
    unsigned i;
    for (i = 0; i < 4; ++i) {
    double run_time = 0;
    uint32_t thread_n = thread_test[i];
    unsigned j;
    for (j = 0; j < 3; ++j) {
    /*clock_t start_time, stop_time;
    uint64_t start_cycles, stop_cycles;
    uint64_t delta;
    double mcycles;*/
    
    argon2_type type = types[j];
    
    /*start_time = clock();
    start_cycles = rdtsc();*/
    
    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &start);
    }
    
    argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen,
    salt_array, inlen, out, outlen, NULL, 0, type,
    ARGON2_VERSION_NUMBER);
    
    /*stop_cycles = rdtsc();
    stop_time = clock();*/
    
    /*delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);*/
    
    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &stop);
    }
    
    accum = ( (double)stop.tv_sec - start.tv_sec )
    + ( (double)stop.tv_nsec - start.tv_nsec );
    
    double mcycles = accum / (1UL << 20);
    uint64_t delta = accum / (m_cost);
    
    printf("%s %d iterations %d MiB %d threads: %2.2f cpb %2.2f "
    "Mcycles \n", argon2_type2string(type, 1), t_cost,
    m_cost >> 10, thread_n, (float)delta / 1024, mcycles);
    
    run_time = 0;
    run_time += accum / BILLION;
    
    /*run_time += accum;
    printf("%2.4f seconds\n\n", (double)run_time);*/
    }
    
    printf("%2.4f seconds\n\n", run_time);
    }
    }
    
    }
    
    int main() {
    benchmark();
    return ARGON2_OK;
    }

    The x86_64 basic test done in the previous blog shows how the program is intended to run. The program is supposed to count the number of CPU cycles spent in the program's main call, "argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen, salt_array, inlen, out, outlen, NULL, 0, type, ARGON2_VERSION_NUMBER);". I did not expect the rdtsc counter found on the x86_64 architecture to be such a complicated thing to replace.
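
    Since rdtsc is an x86 instruction, my port falls back on the POSIX clock_gettime() call for timing. Here is a minimal, stand-alone sketch of that idea (my own example rather than the project's code; the helper name elapsed_seconds and the dummy loop are made up):

    #include <stdio.h>
    #include <time.h>

    /* Difference between two timespec values, in seconds. */
    static double elapsed_seconds(struct timespec start, struct timespec stop) {
        return ((double)stop.tv_sec - (double)start.tv_sec)
             + ((double)stop.tv_nsec - (double)start.tv_nsec) / 1000000000.0;
    }

    int main(void) {
        struct timespec start, stop;
        volatile unsigned long i, sum = 0;

        clock_gettime(CLOCK_REALTIME, &start);
        for (i = 0; i < 50000000UL; ++i)   /* stand-in for argon2_hash() */
            sum += i;
        clock_gettime(CLOCK_REALTIME, &stop);

        printf("%2.4f seconds (sum=%lu)\n",
               elapsed_seconds(start, stop), (unsigned long)sum);
        return 0;
    }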

    This is the portion of code that I assume does the math for calculating the CPU cycles:

    delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);

    The calculation is straightforward: delta is the stop cycle count minus the start cycle count, divided by the variable m_cost (a worked example of the units appears after the loop). m_cost is generated by this for loop:

    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2)
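
    To keep the units straight, here is a small worked example of that calculation for one pass of the loop (my own illustration with a made-up cycle count, not measured output):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t m_cost = (uint32_t)1 << 10;             /* 1024 KiB, i.e. 1 MiB */
        uint64_t cycles = 5ULL * 1024 * 1024;            /* pretend the hash took 5 Mcycles */

        uint64_t delta   = cycles / m_cost;              /* cycles per KiB of memory */
        double   mcycles = (double)cycles / (1UL << 20); /* total cycles, in millions */

        /* The bench prints m_cost >> 10 to show MiB, and delta / 1024 as "cpb",
           which I read as cycles per byte of the memory the hash used. */
        printf("%d MiB: %2.2f cpb %2.2f Mcycles\n",
               (int)(m_cost >> 10), (float)delta / 1024, mcycles);
        return 0;
    }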

    My mistake:

    When looking at the original code I noticed that it had a line I forgot to carry over into my version.

    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);
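
    For context, that line comes from the original clock()-based timing: clock() ticks in CLOCKS_PER_SEC units, so the difference of two readings has to be divided by CLOCKS_PER_SEC to become seconds. A tiny sketch of that original pattern (my own illustration, with a dummy loop standing in for the hashing work):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        clock_t start_time, stop_time;
        double run_time;
        volatile long i, sum = 0;

        start_time = clock();
        for (i = 0; i < 10000000L; ++i)   /* stand-in for argon2_hash() */
            sum += i;
        stop_time = clock();

        /* Convert clock ticks into seconds. */
        run_time = ((double)stop_time - start_time) / CLOCKS_PER_SEC;
        printf("%2.4f seconds (sum=%ld)\n", run_time, (long)sum);
        return 0;
    }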

    I made the change and rebuilt the program using the Makefile.

    cc -std=c89 -O2 -Wall -g -Iinclude -Isrc -pthread src/argon2.c src/core.c src/blake2/blake2b.c src/thread.c src/encoding.c src/ref.c src/bench.c -o bench

    Here is the changed code:

    /*
    * Argon2 reference source code package - reference C implementations
    *
    * Copyright 2015
    * Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
    *
    * You may use this work under the terms of a Creative Commons CC0 1.0
    * License/Waiver or the Apache Public License 2.0, at your option. The terms of
    * these licenses can be found at:
    *
    * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
    * - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
    *
    * You should have received a copy of both of these licenses along with this
    * software. If not, they may be obtained at the above URLs.
    */
    
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #define BILLION 1000000000L;
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif
    
    #include "argon2.h"
    
    /*
    static uint64_t rdtsc(void) {
    #ifdef _MSC_VER
    return __rdtsc();
    #else
    #if defined(__amd64__) || defined(__x86_64__)
    uint64_t rax, rdx;
    __asm__ __volatile__("rdtsc" : "=a"(rax), "=d"(rdx) : :);
    return (rdx << 32) | rax;
    #elif defined(__i386__) || defined(__i386) || defined(__X86__)
    uint64_t rax;
    __asm__ __volatile__("rdtsc" : "=A"(rax) : :);
    return rax;
    #elif defined(__aarch64__)
    return 1;
    #else
    return 0;
    #endif
    #endif
    }
    
    */
    
    
    /*
    * Benchmarks Argon2 with salt length 16, password length 16, t_cost 3,
    and different m_cost and threads
    */
    static void benchmark() {
    #define BENCH_OUTLEN 16
    #define BENCH_INLEN 16
    const uint32_t inlen = BENCH_INLEN;
    const unsigned outlen = BENCH_OUTLEN;
    unsigned char out[BENCH_OUTLEN];
    unsigned char pwd_array[BENCH_INLEN];
    unsigned char salt_array[BENCH_INLEN];
    #undef BENCH_INLEN
    #undef BENCH_OUTLEN
    
    struct timespec start, stop;
    double accum;
    
    uint32_t t_cost = 3;
    uint32_t m_cost;
    uint32_t thread_test[4] = {1, 2, 4, 8};
    argon2_type types[3] = {Argon2_i, Argon2_d, Argon2_id};
    
    memset(pwd_array, 0, inlen);
    memset(salt_array, 1, inlen);
    
    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2) {
    unsigned i;
    for (i = 0; i < 4; ++i) {
    double run_time = 0;
    uint32_t thread_n = thread_test[i];
    unsigned j;
    for (j = 0; j < 3; ++j) {
    /*clock_t start_time, stop_time;
    uint64_t start_cycles, stop_cycles;
    uint64_t delta;
    double mcycles;*/
    
    argon2_type type = types[j];
    
    /*start_time = clock();
    start_cycles = rdtsc();*/
    
    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &start);
    }
    
    argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen,
    salt_array, inlen, out, outlen, NULL, 0, type,
    ARGON2_VERSION_NUMBER);
    
    /*stop_cycles = rdtsc();
    stop_time = clock();*/
    
    /*delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);*/
    
    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
    perror( "clock gettime" ); 
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &stop);
    }
    
    accum = ( (double)stop.tv_sec - (double)start.tv_sec )
    + ( (double)stop.tv_nsec - (double)start.tv_nsec );
    
    double mcycles = accum / (1UL << 20);
    uint64_t delta = accum / (m_cost);
    
    printf("%s %d iterations %d MiB %d threads: %2.2f cpb %2.2f "
    "Mcycles \n", argon2_type2string(type, 1), t_cost,
    m_cost >> 10, thread_n, (float)delta / 1024, mcycles);
    
    run_time += accum / BILLION
    run_time += run_time / (CLOCKS_PER_SEC);
    
    /*run_time += accum;
    printf("%2.4f seconds\n\n", (double)run_time);*/
    }
    
    /*run_time = 0;
    run_time += accum / BILLION;*/
    printf("%2.4f seconds\n\n", run_time);
    }
    }
    
    }
    
    int main() {
    benchmark();
    return ARGON2_OK;
    }
    Command to run the program:
    ./bench
    Result:
    Argon2i 3 iterations 1 MiB 1 threads: 5.38 cpb 5.38 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 4.97 cpb 4.97 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 4.45 cpb 4.45 Mcycles
    0.0155 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 3.50 cpb 3.50 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 3.21 cpb 3.21 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 3.20 cpb 3.20 Mcycles
    0.0104 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 2.69 cpb 2.69 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.61 cpb 2.61 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 2.65 cpb 2.65 Mcycles
    0.0083 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 4.43 cpb 4.43 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 4.41 cpb 4.41 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 4.39 cpb 4.39 Mcycles
    0.0139 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 5.21 cpb 10.42 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 4.98 cpb 9.95 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 4.42 cpb 8.84 Mcycles
    0.0306 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 2.81 cpb 5.63 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 2.73 cpb 5.47 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 0.00 cpb -948.16 Mcycles
    -0.9826 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 1.88 cpb 3.76 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 1.90 cpb 3.80 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.88 cpb 3.76 Mcycles
    0.0119 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 2.52 cpb 5.04 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 2.54 cpb 5.08 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 2.60 cpb 5.20 Mcycles
    0.0161 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 5.29 cpb 21.18 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 4.75 cpb 19.00 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 4.43 cpb 17.72 Mcycles
    0.0607 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.60 cpb 10.41 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.57 cpb 10.27 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.58 cpb 10.31 Mcycles
    0.0325 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.61 cpb 6.42 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.59 cpb 6.37 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 1.60 cpb 6.39 Mcycles
    0.0201 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.09 cpb 8.35 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 2.06 cpb 8.25 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 2.41 cpb 9.64 Mcycles
    0.0275 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 5.52 cpb 44.13 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 5.00 cpb 40.03 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 4.61 cpb 36.90 Mcycles
    0.1269 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.59 cpb 20.76 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.57 cpb 20.56 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.56 cpb 20.52 Mcycles
    0.0648 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.48 cpb 11.85 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.49 cpb 11.88 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.48 cpb 11.84 Mcycles
    0.0373 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 2.24 cpb 17.95 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 0.00 cpb -939.59 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 2.02 cpb 16.16 Mcycles
    -0.9495 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 5.77 cpb 92.33 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 5.31 cpb 84.99 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 5.01 cpb 80.18 Mcycles
    0.2700 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 2.75 cpb 44.05 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 2.73 cpb 43.68 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 2.74 cpb 43.80 Mcycles
    0.1379 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.54 cpb 24.66 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.51 cpb 24.24 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.52 cpb 24.33 Mcycles
    0.0768 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.62 cpb 25.92 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.68 cpb 26.85 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.76 cpb 28.13 Mcycles
    0.0848 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 5.96 cpb 190.66 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 5.88 cpb 188.16 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 0.00 cpb -765.51 Mcycles
    -0.4055 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.29 cpb 105.24 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.25 cpb 104.07 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.26 cpb 104.20 Mcycles
    0.3287 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.85 cpb 59.35 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.84 cpb 58.92 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.85 cpb 59.15 Mcycles
    0.1860 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.92 cpb 61.44 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 1.84 cpb 58.89 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 1.99 cpb 63.67 Mcycles
    0.1929 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 0.00 cpb -564.65 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 6.02 cpb 385.31 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 0.00 cpb -567.80 Mcycles
    -0.7834 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.33 cpb 213.04 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.30 cpb 210.98 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.30 cpb 211.29 Mcycles
    0.6662 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.86 cpb 119.27 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 0.00 cpb -835.44 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.85 cpb 118.59 Mcycles
    -0.6266 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.88 cpb 120.44 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.94 cpb 124.37 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.63 cpb 104.46 Mcycles
    0.3662 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 0.00 cpb -158.98 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 0.00 cpb -167.45 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 0.00 cpb -165.81 Mcycles
    -0.5162 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 3.38 cpb 432.10 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 3.34 cpb 427.70 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 0.00 cpb -525.12 Mcycles
    0.3509 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 1.88 cpb 240.61 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 1.86 cpb 238.46 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 0.00 cpb -715.31 Mcycles
    -0.2477 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 1.56 cpb 199.22 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 1.72 cpb 219.92 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 1.69 cpb 216.88 Mcycles
    0.6669 seconds

    I will change the placement of the equations in the hope of fixing those negative results.
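
    As I understand it, the negative values came from adding the raw nanosecond difference directly onto the second difference, so the fix is to scale only the nanosecond part down by one billion. A tiny made-up example showing the two calculations side by side (the timestamp values are invented for illustration):

    #include <stdio.h>

    int main(void) {
        /* Made-up timestamps where the stop nanosecond field is smaller than
           the start one, which is what produced the negative results. */
        double start_sec = 100.0, start_nsec = 900000000.0;  /* t = 100.9 s */
        double stop_sec  = 101.0, stop_nsec  = 200000000.0;  /* t = 101.2 s */

        /* Previous calculation: seconds and nanoseconds mixed together. */
        double wrong = (stop_sec - start_sec) + (stop_nsec - start_nsec);

        /* Intended calculation: scale the nanosecond part to seconds first. */
        double right = (stop_sec - start_sec)
                     + (stop_nsec - start_nsec) / 1000000000.0;

        printf("mixed units: %.4f   in seconds: %.4f\n", wrong, right);
        return 0;
    }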

    Here is the changed code:
    /*
    * Argon2 reference source code package - reference C implementations
    *
    * Copyright 2015
    * Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
    *
    * You may use this work under the terms of a Creative Commons CC0 1.0
    * License/Waiver or the Apache Public License 2.0, at your option. The terms of
    * these licenses can be found at:
    *
    * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
    * - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
    *
    * You should have received a copy of both of these licenses along with this
    * software. If not, they may be obtained at the above URLs.
    */
    
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #define BILLION 1000000000L;
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif
    
    #include "argon2.h"
    
    /*
    static uint64_t rdtsc(void) {
    #ifdef _MSC_VER
    return __rdtsc();
    #else
    #if defined(__amd64__) || defined(__x86_64__)
    uint64_t rax, rdx;
    __asm__ __volatile__("rdtsc" : "=a"(rax), "=d"(rdx) : :);
    return (rdx << 32) | rax;
    #elif defined(__i386__) || defined(__i386) || defined(__X86__)
    uint64_t rax;
    __asm__ __volatile__("rdtsc" : "=A"(rax) : :);
    return rax;
    #elif defined(__aarch64__)
    return 1;
    #else
    return 0;
    #endif
    #endif
    }
    
    */
    
    
    /*
    * Benchmarks Argon2 with salt length 16, password length 16, t_cost 3,
    and different m_cost and threads
    */
    static void benchmark() {
    #define BENCH_OUTLEN 16
    #define BENCH_INLEN 16
    const uint32_t inlen = BENCH_INLEN;
    const unsigned outlen = BENCH_OUTLEN;
    unsigned char out[BENCH_OUTLEN];
    unsigned char pwd_array[BENCH_INLEN];
    unsigned char salt_array[BENCH_INLEN];
    #undef BENCH_INLEN
    #undef BENCH_OUTLEN
    
    struct timespec start, stop;
    double accum;
    
    uint32_t t_cost = 3;
    uint32_t m_cost;
    uint32_t thread_test[4] = {1, 2, 4, 8};
    argon2_type types[3] = {Argon2_i, Argon2_d, Argon2_id};
    
    memset(pwd_array, 0, inlen);
    memset(salt_array, 1, inlen);
    
    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2) {
    unsigned i;
    for (i = 0; i < 4; ++i) {
    double run_time = 0;
    uint32_t thread_n = thread_test[i];
    unsigned j;
    for (j = 0; j < 3; ++j) {
    /*clock_t start_time, stop_time;
    uint64_t start_cycles, stop_cycles;
    uint64_t delta;
    double mcycles;*/
    
    argon2_type type = types[j];
    
    /*start_time = clock();
    start_cycles = rdtsc();*/
    
    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &start);
    }
    
    argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen,
    salt_array, inlen, out, outlen, NULL, 0, type,
    ARGON2_VERSION_NUMBER);
    
    /*stop_cycles = rdtsc();
    stop_time = clock();*/
    
    /*delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);*/
    
    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
    perror( "clock gettime" ); 
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &stop);
    }
    
    accum = ( (double)stop.tv_sec - (double)start.tv_sec )
    + ( (double)stop.tv_nsec - (double)start.tv_nsec ) / BILLION;
    
    double mcycles = accum * BILLION;
    mcycles = mcycles / (1UL << 20);
    uint64_t delta = accum * BILLION;
    delta = delta / (m_cost);
    
    printf("%s %d iterations %d MiB %d threads: %2.2f cpb %2.2f "
    "Mcycles \n", argon2_type2string(type, 1), t_cost,
    m_cost >> 10, thread_n, (float)delta / 1024, mcycles);
    
    run_time += run_time / (CLOCKS_PER_SEC);
    
    /*run_time += accum;
    printf("%2.4f seconds\n\n", (double)run_time);*/
    }
    
    printf("%2.4f seconds\n\n", run_time);
    }
    }
    
    }
    
    int main() {
    benchmark();
    return ARGON2_OK;
    }
    Here is the result:
    Argon2i 3 iterations 1 MiB 1 threads: 5.61 cpb 5.61 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 5.18 cpb 5.18 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 4.64 cpb 4.64 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 3.64 cpb 3.64 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 3.26 cpb 3.26 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 3.29 cpb 3.29 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 2.69 cpb 2.69 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.69 cpb 2.69 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 2.64 cpb 2.64 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 4.44 cpb 4.44 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 4.41 cpb 4.41 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 4.45 cpb 4.45 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 5.45 cpb 10.90 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 5.19 cpb 10.39 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 4.67 cpb 9.34 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 2.95 cpb 5.90 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 2.88 cpb 5.75 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 2.91 cpb 5.83 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 2.09 cpb 4.18 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 2.09 cpb 4.17 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.94 cpb 3.88 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 2.44 cpb 4.88 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 2.48 cpb 4.96 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 2.63 cpb 5.26 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 5.52 cpb 22.07 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 5.01 cpb 20.06 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 4.70 cpb 18.79 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.78 cpb 11.13 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.69 cpb 10.76 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.71 cpb 10.83 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.68 cpb 6.73 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.67 cpb 6.69 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 1.68 cpb 6.74 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.24 cpb 8.98 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 2.47 cpb 9.87 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 1.94 cpb 7.76 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 5.71 cpb 45.69 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 5.24 cpb 41.95 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 4.87 cpb 38.96 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.71 cpb 21.71 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.68 cpb 21.48 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.68 cpb 21.46 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.55 cpb 12.43 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.54 cpb 12.31 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.56 cpb 12.46 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 1.77 cpb 14.15 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 1.72 cpb 13.77 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 1.80 cpb 14.39 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 5.97 cpb 95.46 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 5.52 cpb 88.28 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 5.21 cpb 83.43 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 2.87 cpb 45.92 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 2.83 cpb 45.30 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 2.84 cpb 45.51 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.59 cpb 25.43 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.57 cpb 25.17 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.58 cpb 25.32 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.92 cpb 30.72 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.71 cpb 27.37 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.78 cpb 28.47 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 6.19 cpb 198.09 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 6.10 cpb 195.33 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 6.11 cpb 195.65 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.39 cpb 108.50 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.36 cpb 107.50 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.36 cpb 107.38 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.91 cpb 61.22 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.90 cpb 60.79 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.90 cpb 60.86 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.90 cpb 60.93 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 1.90 cpb 60.83 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 1.97 cpb 62.99 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 6.32 cpb 404.43 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 6.23 cpb 398.94 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 6.24 cpb 399.53 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.45 cpb 220.50 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.41 cpb 218.07 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.42 cpb 218.96 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.92 cpb 123.16 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 1.91 cpb 122.17 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.91 cpb 122.42 Mcycles
    0.0000 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.82 cpb 116.25 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.84 cpb 117.60 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.87 cpb 119.54 Mcycles
    0.0000 seconds

    The results now show positive numbers for Mcycles, but I accidentally removed the calculation of the elapsed time at the end. I will fix that now.
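
    For reference, the original x86_64 bench accumulated the run time inside the inner loop over the three Argon2 variants and printed the total once per thread count. A minimal sketch of that pattern (my own simplification, with invented accum values rather than measurements):

    #include <stdio.h>

    int main(void) {
        /* Pretend these are the accum values, in seconds, measured for
           Argon2i, Argon2d and Argon2id at one memory size / thread count. */
        double accum[3] = {0.0016, 0.0015, 0.0015};
        double run_time = 0;
        int j;

        for (j = 0; j < 3; ++j)
            run_time += accum[j];               /* accumulate inside the inner loop */

        printf("%2.4f seconds\n\n", run_time);  /* print once per thread count */
        return 0;
    }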

    Here is the changed code:
    /*
    * Argon2 reference source code package - reference C implementations
    *
    * Copyright 2015
    * Daniel Dinu, Dmitry Khovratovich, Jean-Philippe Aumasson, and Samuel Neves
    *
    * You may use this work under the terms of a Creative Commons CC0 1.0
    * License/Waiver or the Apache Public License 2.0, at your option. The terms of
    * these licenses can be found at:
    *
    * - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
    * - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
    *
    * You should have received a copy of both of these licenses along with this
    * software. If not, they may be obtained at the above URLs.
    */
    
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #define BILLION 1000000000L;
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif
    
    #include "argon2.h"
    
    /*
    static uint64_t rdtsc(void) {
    #ifdef _MSC_VER
    return __rdtsc();
    #else
    #if defined(__amd64__) || defined(__x86_64__)
    uint64_t rax, rdx;
    __asm__ __volatile__("rdtsc" : "=a"(rax), "=d"(rdx) : :);
    return (rdx << 32) | rax;
    #elif defined(__i386__) || defined(__i386) || defined(__X86__)
    uint64_t rax;
    __asm__ __volatile__("rdtsc" : "=A"(rax) : :);
    return rax;
    #elif defined(__aarch64__)
    return 1;
    #else
    return 0;
    #endif
    #endif
    }
    
    */
    
    
    /*
    * Benchmarks Argon2 with salt length 16, password length 16, t_cost 3,
    and different m_cost and threads
    */
    static void benchmark() {
    #define BENCH_OUTLEN 16
    #define BENCH_INLEN 16
    const uint32_t inlen = BENCH_INLEN;
    const unsigned outlen = BENCH_OUTLEN;
    unsigned char out[BENCH_OUTLEN];
    unsigned char pwd_array[BENCH_INLEN];
    unsigned char salt_array[BENCH_INLEN];
    #undef BENCH_INLEN
    #undef BENCH_OUTLEN
    
    struct timespec start, stop;
    double accum;
    
    uint32_t t_cost = 3;
    uint32_t m_cost;
    uint32_t thread_test[4] = {1, 2, 4, 8};
    argon2_type types[3] = {Argon2_i, Argon2_d, Argon2_id};
    
    memset(pwd_array, 0, inlen);
    memset(salt_array, 1, inlen);
    
    for (m_cost = (uint32_t)1 << 10; m_cost <= (uint32_t)1 << 22; m_cost *= 2) {
    unsigned i;
    for (i = 0; i < 4; ++i) {
    double run_time = 0;
    uint32_t thread_n = thread_test[i];
    unsigned j;
    for (j = 0; j < 3; ++j) {
    /*clock_t start_time, stop_time;
    uint64_t start_cycles, stop_cycles;
    uint64_t delta;
    double mcycles;*/
    
    argon2_type type = types[j];
    
    /*start_time = clock();
    start_cycles = rdtsc();*/
    
    if( clock_gettime( CLOCK_REALTIME, &start) == -1 ) {
    perror( "clock gettime" );
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &start);
    }
    
    argon2_hash(t_cost, m_cost, thread_n, pwd_array, inlen,
    salt_array, inlen, out, outlen, NULL, 0, type,
    ARGON2_VERSION_NUMBER);
    
    /*stop_cycles = rdtsc();
    stop_time = clock();*/
    
    /*delta = (stop_cycles - start_cycles) / (m_cost);
    mcycles = (double)(stop_cycles - start_cycles) / (1UL << 20);
    run_time += ((double)stop_time - start_time) / (CLOCKS_PER_SEC);*/
    
    if( clock_gettime( CLOCK_REALTIME, &stop) == -1 ) {
    perror( "clock gettime" ); 
    exit( EXIT_FAILURE );
    }
    else
    {
    clock_gettime(CLOCK_REALTIME, &stop);
    }
    
    accum = ( (double)stop.tv_sec - (double)start.tv_sec )
    + ( (double)stop.tv_nsec - (double)start.tv_nsec ) / BILLION;
    
    double mcycles = accum * BILLION;
    mcycles = mcycles / (1UL << 20);
    uint64_t delta = accum * BILLION;
    delta = delta / (m_cost);
    
    printf("%s %d iterations %d MiB %d threads: %2.2f cpb %2.2f "
    "Mcycles \n", argon2_type2string(type, 1), t_cost,
    m_cost >> 10, thread_n, (float)delta / 1024, mcycles);
    
    run_time += run_time / (CLOCKS_PER_SEC);
    
    /*run_time += accum;
    printf("%2.4f seconds\n\n", (double)run_time);*/
    }
    
    /*run_time = 0;*/
    run_time += accum;
    printf("%2.4f seconds\n\n", run_time);
    }
    }
    
    }
    
    int main() {
    benchmark();
    return ARGON2_OK;
    }

    Hopefully it works now.

    Rebuild and test.

    Result:
    Argon2i 3 iterations 1 MiB 1 threads: 5.24 cpb 5.24 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 4.89 cpb 4.90 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 4.40 cpb 4.40 Mcycles
    0.0046 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 3.46 cpb 3.46 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 3.13 cpb 3.13 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 3.16 cpb 3.16 Mcycles
    0.0033 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 2.65 cpb 2.65 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.58 cpb 2.58 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 2.61 cpb 2.61 Mcycles
    0.0027 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 4.36 cpb 4.36 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 4.27 cpb 4.27 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 4.25 cpb 4.25 Mcycles
    0.0045 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 5.20 cpb 10.41 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 4.93 cpb 9.86 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 4.41 cpb 8.82 Mcycles
    0.0092 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 2.83 cpb 5.65 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 2.72 cpb 5.44 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 2.73 cpb 5.47 Mcycles
    0.0057 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 1.87 cpb 3.73 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 1.99 cpb 3.98 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.87 cpb 3.74 Mcycles
    0.0039 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 2.46 cpb 4.93 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 2.52 cpb 5.05 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 2.55 cpb 5.10 Mcycles
    0.0053 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 5.28 cpb 21.11 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 4.80 cpb 19.21 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 4.56 cpb 18.22 Mcycles
    0.0191 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.67 cpb 10.66 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.56 cpb 10.25 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.57 cpb 10.27 Mcycles
    0.0108 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.61 cpb 6.42 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.57 cpb 6.29 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 2.26 cpb 9.03 Mcycles
    0.0095 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.43 cpb 9.74 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 1.99 cpb 7.95 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 2.15 cpb 8.61 Mcycles
    0.0090 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 5.50 cpb 43.97 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 5.06 cpb 40.49 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 4.63 cpb 37.06 Mcycles
    0.0389 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.62 cpb 20.97 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.56 cpb 20.48 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.57 cpb 20.53 Mcycles
    0.0215 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.49 cpb 11.91 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.46 cpb 11.69 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.47 cpb 11.74 Mcycles
    0.0123 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 1.96 cpb 15.66 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 1.73 cpb 13.82 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 1.86 cpb 14.86 Mcycles
    0.0156 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 5.75 cpb 92.08 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 5.29 cpb 84.71 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 5.01 cpb 80.20 Mcycles
    0.0841 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 2.75 cpb 44.01 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 2.73 cpb 43.66 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 2.72 cpb 43.55 Mcycles
    0.0457 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.52 cpb 24.39 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.50 cpb 24.08 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.51 cpb 24.14 Mcycles
    0.0253 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.70 cpb 27.21 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.67 cpb 26.80 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.70 cpb 27.21 Mcycles
    0.0285 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 5.93 cpb 189.81 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 5.88 cpb 188.10 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 5.86 cpb 187.57 Mcycles
    0.1967 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.29 cpb 105.13 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.25 cpb 103.96 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.25 cpb 104.06 Mcycles
    0.1091 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.85 cpb 59.28 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.84 cpb 58.83 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.84 cpb 58.88 Mcycles
    0.0617 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.82 cpb 58.35 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 1.99 cpb 63.75 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 1.88 cpb 60.21 Mcycles
    0.0631 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 6.07 cpb 388.65 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 6.01 cpb 384.52 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 6.02 cpb 385.18 Mcycles
    0.4039 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.34 cpb 213.63 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.30 cpb 211.42 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.30 cpb 211.20 Mcycles
    0.2215 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.87 cpb 119.59 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 1.84 cpb 118.12 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.85 cpb 118.15 Mcycles
    0.1239 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.74 cpb 111.63 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.76 cpb 112.49 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.85 cpb 118.57 Mcycles
    0.1243 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 6.20 cpb 793.29 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 6.14 cpb 785.44 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 6.14 cpb 786.33 Mcycles
    0.8245 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 3.38 cpb 432.51 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 3.35 cpb 428.33 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 3.35 cpb 428.92 Mcycles
    0.4498 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 1.88 cpb 240.65 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 1.86 cpb 238.37 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 1.86 cpb 238.47 Mcycles
    0.2501 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 1.60 cpb 205.20 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 1.71 cpb 218.40 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 1.77 cpb 227.16 Mcycles
    0.2382 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 6.30 cpb 1611.99 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 6.24 cpb 1597.32 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 6.25 cpb 1600.12 Mcycles
    1.6778 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 3.42 cpb 874.77 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 3.39 cpb 867.53 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 3.39 cpb 868.38 Mcycles
    0.9106 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 1.92 cpb 491.15 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 1.88 cpb 481.03 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 1.89 cpb 484.98 Mcycles
    0.5085 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 1.44 cpb 369.10 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 1.63 cpb 418.42 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 1.67 cpb 428.07 Mcycles
    0.4489 seconds

    The results look good. Next, I will run the benchmark again, but with the GNU gcc optimization level raised to -O3.

    I can change the option using the Vim editor.

    command:
    vi Makefile

    I will change the following line:

    CFLAGS += -std=c89 -O2 -Wall -g -Iinclude -Isrc

    The change will look like this:

    CFLAGS += -std=c89 -O3 -Wall -g -Iinclude -Isrc

    I will save the file with the new changes and rebuild the program to test it.

    command:
    make bench
    Result:
    Argon2i 3 iterations 1 MiB 1 threads: 4.80 cpb 4.80 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 4.52 cpb 4.52 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 3.96 cpb 3.96 Mcycles
    0.0042 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 3.33 cpb 3.33 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 2.92 cpb 2.92 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 2.91 cpb 2.91 Mcycles
    0.0031 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 2.46 cpb 2.46 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 2.43 cpb 2.43 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 2.48 cpb 2.48 Mcycles
    0.0026 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 4.52 cpb 4.52 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 4.39 cpb 4.39 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 4.33 cpb 4.33 Mcycles
    0.0045 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 4.79 cpb 9.57 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 4.52 cpb 9.04 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 4.00 cpb 8.00 Mcycles
    0.0084 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 2.62 cpb 5.25 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 2.58 cpb 5.17 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 2.59 cpb 5.18 Mcycles
    0.0054 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 1.85 cpb 3.69 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 1.85 cpb 3.70 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.77 cpb 3.53 Mcycles
    0.0037 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 2.31 cpb 4.62 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 2.42 cpb 4.84 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 2.46 cpb 4.93 Mcycles
    0.0052 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 4.87 cpb 19.47 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 4.39 cpb 17.55 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 4.03 cpb 16.11 Mcycles
    0.0169 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 2.45 cpb 9.81 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 2.40 cpb 9.61 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 2.39 cpb 9.56 Mcycles
    0.0100 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 1.48 cpb 5.93 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 1.47 cpb 5.87 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 1.50 cpb 5.98 Mcycles
    0.0063 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 2.21 cpb 8.84 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 2.05 cpb 8.19 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 2.13 cpb 8.53 Mcycles
    0.0089 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 5.14 cpb 41.16 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 4.62 cpb 36.95 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 4.23 cpb 33.81 Mcycles
    0.0355 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 2.42 cpb 19.33 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 2.38 cpb 19.03 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 2.38 cpb 19.03 Mcycles
    0.0200 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 1.38 cpb 11.09 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 1.38 cpb 11.00 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 1.38 cpb 11.07 Mcycles
    0.0116 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 1.73 cpb 13.88 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 1.81 cpb 14.47 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 1.90 cpb 15.24 Mcycles
    0.0160 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 5.39 cpb 86.31 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 4.93 cpb 78.84 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 4.66 cpb 74.55 Mcycles
    0.0782 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 2.59 cpb 41.41 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 2.56 cpb 40.95 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 2.57 cpb 41.09 Mcycles
    0.0431 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 1.47 cpb 23.47 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 1.46 cpb 23.35 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 1.44 cpb 23.05 Mcycles
    0.0242 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 1.69 cpb 27.07 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 1.71 cpb 27.36 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 1.60 cpb 25.60 Mcycles
    0.0268 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 5.56 cpb 178.05 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 5.48 cpb 175.31 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 5.49 cpb 175.62 Mcycles
    0.1841 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 3.10 cpb 99.33 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 3.07 cpb 98.24 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 3.07 cpb 98.39 Mcycles
    0.1032 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 1.78 cpb 56.83 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 1.76 cpb 56.34 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 1.76 cpb 56.46 Mcycles
    0.0592 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 1.80 cpb 57.72 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 1.75 cpb 56.17 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 1.80 cpb 57.75 Mcycles
    0.0606 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 5.69 cpb 364.37 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 5.63 cpb 360.52 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 5.64 cpb 361.19 Mcycles
    0.3787 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 3.17 cpb 203.00 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 3.14 cpb 200.72 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 3.14 cpb 201.11 Mcycles
    0.2109 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 1.79 cpb 114.35 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 1.77 cpb 113.36 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 1.78 cpb 114.01 Mcycles
    0.1195 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 1.69 cpb 108.44 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 1.72 cpb 109.93 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 1.70 cpb 108.90 Mcycles
    0.1142 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 5.81 cpb 743.61 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 5.76 cpb 737.17 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 5.76 cpb 737.74 Mcycles
    0.7736 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 3.23 cpb 413.39 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 3.20 cpb 409.93 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 3.20 cpb 410.16 Mcycles
    0.4301 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 1.80 cpb 230.53 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 1.79 cpb 228.66 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 1.78 cpb 228.44 Mcycles
    0.2395 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 1.69 cpb 216.05 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 1.62 cpb 207.76 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 1.65 cpb 211.43 Mcycles
    0.2217 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 5.93 cpb 1517.87 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 5.87 cpb 1503.31 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 5.88 cpb 1505.68 Mcycles
    1.5788 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 3.27 cpb 838.35 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 3.25 cpb 831.07 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 3.25 cpb 831.79 Mcycles
    0.8722 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 1.81 cpb 464.17 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 1.81 cpb 463.87 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 1.80 cpb 461.07 Mcycles
    0.4835 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 1.53 cpb 390.76 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 1.59 cpb 406.13 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 1.60 cpb 409.85 Mcycles
    0.4298 seconds

    These results are quite similar to those at optimization level -O2. The lack of improvement could be due to the additional writes of variables into memory.

    Test 2(x86_64):

    I will try the changed code on machine 2.

    As mentioned before, this machine has the following specifications:

    Machine 2:

    Intel(R) Xeon(R) CPU E5-1630 v4 @ 3.70GHz

    Four sticks of 8GB DIMM DDR4 RAM at 2.4 GHz (32 GB of RAM in total)

    x86_64 Fedora 28 version of Linux Operating System

    I will run the test with optimization level -O2.

    Compile the program:

    cc -std=c89 -O2 -Wall -g -Iinclude -Isrc -pthread -march=native src/argon2.c src/core.c src/blake2/blake2b.c src/thread.c src/encoding.c src/opt.c src/bench.c -o bench
    Result:
    Argon2i 3 iterations 1 MiB 1 threads: 3.54 cpb 3.54 Mcycles
    Argon2d 3 iterations 1 MiB 1 threads: 3.20 cpb 3.20 Mcycles
    Argon2id 3 iterations 1 MiB 1 threads: 2.73 cpb 2.73 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 1 MiB 2 threads: 2.92 cpb 2.92 Mcycles
    Argon2d 3 iterations 1 MiB 2 threads: 2.34 cpb 2.34 Mcycles
    Argon2id 3 iterations 1 MiB 2 threads: 2.40 cpb 2.40 Mcycles
    0.0025 seconds
    
    Argon2i 3 iterations 1 MiB 4 threads: 1.97 cpb 1.97 Mcycles
    Argon2d 3 iterations 1 MiB 4 threads: 1.87 cpb 1.87 Mcycles
    Argon2id 3 iterations 1 MiB 4 threads: 1.94 cpb 1.94 Mcycles
    0.0020 seconds
    
    Argon2i 3 iterations 1 MiB 8 threads: 3.21 cpb 3.21 Mcycles
    Argon2d 3 iterations 1 MiB 8 threads: 3.00 cpb 3.00 Mcycles
    Argon2id 3 iterations 1 MiB 8 threads: 2.81 cpb 2.81 Mcycles
    0.0030 seconds
    
    Argon2i 3 iterations 2 MiB 1 threads: 1.40 cpb 2.79 Mcycles
    Argon2d 3 iterations 2 MiB 1 threads: 1.21 cpb 2.42 Mcycles
    Argon2id 3 iterations 2 MiB 1 threads: 1.04 cpb 2.08 Mcycles
    0.0022 seconds
    
    Argon2i 3 iterations 2 MiB 2 threads: 1.44 cpb 2.88 Mcycles
    Argon2d 3 iterations 2 MiB 2 threads: 1.36 cpb 2.72 Mcycles
    Argon2id 3 iterations 2 MiB 2 threads: 1.37 cpb 2.73 Mcycles
    0.0029 seconds
    
    Argon2i 3 iterations 2 MiB 4 threads: 0.99 cpb 1.99 Mcycles
    Argon2d 3 iterations 2 MiB 4 threads: 1.11 cpb 2.21 Mcycles
    Argon2id 3 iterations 2 MiB 4 threads: 1.05 cpb 2.11 Mcycles
    0.0022 seconds
    
    Argon2i 3 iterations 2 MiB 8 threads: 1.67 cpb 3.35 Mcycles
    Argon2d 3 iterations 2 MiB 8 threads: 1.54 cpb 3.08 Mcycles
    Argon2id 3 iterations 2 MiB 8 threads: 1.51 cpb 3.02 Mcycles
    0.0032 seconds
    
    Argon2i 3 iterations 4 MiB 1 threads: 1.41 cpb 5.65 Mcycles
    Argon2d 3 iterations 4 MiB 1 threads: 1.09 cpb 4.38 Mcycles
    Argon2id 3 iterations 4 MiB 1 threads: 0.98 cpb 3.92 Mcycles
    0.0041 seconds
    
    Argon2i 3 iterations 4 MiB 2 threads: 1.28 cpb 5.13 Mcycles
    Argon2d 3 iterations 4 MiB 2 threads: 1.21 cpb 4.85 Mcycles
    Argon2id 3 iterations 4 MiB 2 threads: 1.23 cpb 4.93 Mcycles
    0.0052 seconds
    
    Argon2i 3 iterations 4 MiB 4 threads: 0.79 cpb 3.18 Mcycles
    Argon2d 3 iterations 4 MiB 4 threads: 0.79 cpb 3.18 Mcycles
    Argon2id 3 iterations 4 MiB 4 threads: 0.81 cpb 3.22 Mcycles
    0.0034 seconds
    
    Argon2i 3 iterations 4 MiB 8 threads: 1.00 cpb 4.00 Mcycles
    Argon2d 3 iterations 4 MiB 8 threads: 0.89 cpb 3.58 Mcycles
    Argon2id 3 iterations 4 MiB 8 threads: 0.91 cpb 3.64 Mcycles
    0.0038 seconds
    
    Argon2i 3 iterations 8 MiB 1 threads: 1.47 cpb 11.79 Mcycles
    Argon2d 3 iterations 8 MiB 1 threads: 1.13 cpb 9.08 Mcycles
    Argon2id 3 iterations 8 MiB 1 threads: 0.97 cpb 7.80 Mcycles
    0.0082 seconds
    
    Argon2i 3 iterations 8 MiB 2 threads: 1.27 cpb 10.18 Mcycles
    Argon2d 3 iterations 8 MiB 2 threads: 0.87 cpb 6.95 Mcycles
    Argon2id 3 iterations 8 MiB 2 threads: 0.88 cpb 7.00 Mcycles
    0.0073 seconds
    
    Argon2i 3 iterations 8 MiB 4 threads: 0.91 cpb 7.31 Mcycles
    Argon2d 3 iterations 8 MiB 4 threads: 0.80 cpb 6.42 Mcycles
    Argon2id 3 iterations 8 MiB 4 threads: 0.59 cpb 4.70 Mcycles
    0.0049 seconds
    
    Argon2i 3 iterations 8 MiB 8 threads: 0.82 cpb 6.53 Mcycles
    Argon2d 3 iterations 8 MiB 8 threads: 0.83 cpb 6.63 Mcycles
    Argon2id 3 iterations 8 MiB 8 threads: 0.81 cpb 6.47 Mcycles
    0.0068 seconds
    
    Argon2i 3 iterations 16 MiB 1 threads: 1.89 cpb 30.20 Mcycles
    Argon2d 3 iterations 16 MiB 1 threads: 1.33 cpb 21.22 Mcycles
    Argon2id 3 iterations 16 MiB 1 threads: 1.17 cpb 18.70 Mcycles
    0.0196 seconds
    
    Argon2i 3 iterations 16 MiB 2 threads: 1.17 cpb 18.80 Mcycles
    Argon2d 3 iterations 16 MiB 2 threads: 0.81 cpb 13.03 Mcycles
    Argon2id 3 iterations 16 MiB 2 threads: 0.79 cpb 12.57 Mcycles
    0.0132 seconds
    
    Argon2i 3 iterations 16 MiB 4 threads: 0.80 cpb 12.79 Mcycles
    Argon2d 3 iterations 16 MiB 4 threads: 0.56 cpb 8.97 Mcycles
    Argon2id 3 iterations 16 MiB 4 threads: 0.53 cpb 8.45 Mcycles
    0.0089 seconds
    
    Argon2i 3 iterations 16 MiB 8 threads: 0.60 cpb 9.57 Mcycles
    Argon2d 3 iterations 16 MiB 8 threads: 0.64 cpb 10.22 Mcycles
    Argon2id 3 iterations 16 MiB 8 threads: 0.68 cpb 10.83 Mcycles
    0.0114 seconds
    
    Argon2i 3 iterations 32 MiB 1 threads: 1.64 cpb 52.53 Mcycles
    Argon2d 3 iterations 32 MiB 1 threads: 1.50 cpb 47.89 Mcycles
    Argon2id 3 iterations 32 MiB 1 threads: 1.49 cpb 47.84 Mcycles
    0.0502 seconds
    
    Argon2i 3 iterations 32 MiB 2 threads: 1.28 cpb 41.08 Mcycles
    Argon2d 3 iterations 32 MiB 2 threads: 1.29 cpb 41.17 Mcycles
    Argon2id 3 iterations 32 MiB 2 threads: 1.38 cpb 44.31 Mcycles
    0.0465 seconds
    
    Argon2i 3 iterations 32 MiB 4 threads: 0.86 cpb 27.46 Mcycles
    Argon2d 3 iterations 32 MiB 4 threads: 0.74 cpb 23.58 Mcycles
    Argon2id 3 iterations 32 MiB 4 threads: 0.65 cpb 20.68 Mcycles
    0.0217 seconds
    
    Argon2i 3 iterations 32 MiB 8 threads: 0.68 cpb 21.81 Mcycles
    Argon2d 3 iterations 32 MiB 8 threads: 0.69 cpb 22.09 Mcycles
    Argon2id 3 iterations 32 MiB 8 threads: 0.68 cpb 21.73 Mcycles
    0.0228 seconds
    
    Argon2i 3 iterations 64 MiB 1 threads: 1.61 cpb 103.11 Mcycles
    Argon2d 3 iterations 64 MiB 1 threads: 1.58 cpb 101.05 Mcycles
    Argon2id 3 iterations 64 MiB 1 threads: 1.58 cpb 101.25 Mcycles
    0.1062 seconds
    
    Argon2i 3 iterations 64 MiB 2 threads: 1.44 cpb 92.42 Mcycles
    Argon2d 3 iterations 64 MiB 2 threads: 1.18 cpb 75.76 Mcycles
    Argon2id 3 iterations 64 MiB 2 threads: 1.18 cpb 75.28 Mcycles
    0.0789 seconds
    
    Argon2i 3 iterations 64 MiB 4 threads: 0.76 cpb 48.48 Mcycles
    Argon2d 3 iterations 64 MiB 4 threads: 0.65 cpb 41.49 Mcycles
    Argon2id 3 iterations 64 MiB 4 threads: 0.63 cpb 40.49 Mcycles
    0.0425 seconds
    
    Argon2i 3 iterations 64 MiB 8 threads: 0.58 cpb 37.08 Mcycles
    Argon2d 3 iterations 64 MiB 8 threads: 0.61 cpb 38.88 Mcycles
    Argon2id 3 iterations 64 MiB 8 threads: 0.61 cpb 39.02 Mcycles
    0.0409 seconds
    
    Argon2i 3 iterations 128 MiB 1 threads: 1.72 cpb 220.68 Mcycles
    Argon2d 3 iterations 128 MiB 1 threads: 1.65 cpb 211.20 Mcycles
    Argon2id 3 iterations 128 MiB 1 threads: 1.61 cpb 206.66 Mcycles
    0.2167 seconds
    
    Argon2i 3 iterations 128 MiB 2 threads: 1.12 cpb 143.16 Mcycles
    Argon2d 3 iterations 128 MiB 2 threads: 1.11 cpb 142.53 Mcycles
    Argon2id 3 iterations 128 MiB 2 threads: 1.11 cpb 142.67 Mcycles
    0.1496 seconds
    
    Argon2i 3 iterations 128 MiB 4 threads: 0.68 cpb 87.52 Mcycles
    Argon2d 3 iterations 128 MiB 4 threads: 0.68 cpb 86.96 Mcycles
    Argon2id 3 iterations 128 MiB 4 threads: 0.68 cpb 86.78 Mcycles
    0.0910 seconds
    
    Argon2i 3 iterations 128 MiB 8 threads: 0.59 cpb 75.56 Mcycles
    Argon2d 3 iterations 128 MiB 8 threads: 0.55 cpb 70.96 Mcycles
    Argon2id 3 iterations 128 MiB 8 threads: 0.58 cpb 74.02 Mcycles
    0.0776 seconds
    
    Argon2i 3 iterations 256 MiB 1 threads: 1.75 cpb 447.73 Mcycles
    Argon2d 3 iterations 256 MiB 1 threads: 1.62 cpb 414.48 Mcycles
    Argon2id 3 iterations 256 MiB 1 threads: 1.62 cpb 415.25 Mcycles
    0.4354 seconds
    
    Argon2i 3 iterations 256 MiB 2 threads: 1.17 cpb 299.72 Mcycles
    Argon2d 3 iterations 256 MiB 2 threads: 1.07 cpb 274.17 Mcycles
    Argon2id 3 iterations 256 MiB 2 threads: 1.14 cpb 291.48 Mcycles
    0.3056 seconds
    
    Argon2i 3 iterations 256 MiB 4 threads: 0.70 cpb 180.25 Mcycles
    Argon2d 3 iterations 256 MiB 4 threads: 0.71 cpb 182.79 Mcycles
    Argon2id 3 iterations 256 MiB 4 threads: 0.70 cpb 180.23 Mcycles
    0.1890 seconds
    
    Argon2i 3 iterations 256 MiB 8 threads: 0.54 cpb 137.75 Mcycles
    Argon2d 3 iterations 256 MiB 8 threads: 0.54 cpb 139.23 Mcycles
    Argon2id 3 iterations 256 MiB 8 threads: 0.53 cpb 134.82 Mcycles
    0.1414 seconds

    (This blog is getting too long. I will continue in Project: Part3 – Optimizing and porting argon2 package using C and Assembler language(Progress 4))

     

    by dcchen at December 12, 2018 12:16 AM

    December 11, 2018


    Nathan Misener

    SPO600 Final Part 3: Implementation

    For part three of this project I am altering the code in brotli. As discussed in the previous blog, I am working on the code in the utf8_util.c file. We will be focusing on this method only:


    static size_t BrotliParseAsUTF8(
        int* symbol, const uint8_t* input, size_t size) {
      /* ASCII */
      if ((input[0] & 0x80) == 0) {
        *symbol = input[0];
        if (*symbol > 0) {
          return 1;
        }
      }
      /* 2-byte UTF8 */
      if (size > 1u &&
          (input[0] & 0xE0) == 0xC0 &&
          (input[1] & 0xC0) == 0x80) {
        *symbol = (((input[0] & 0x1F) << 6) |
                   (input[1] & 0x3F));
        if (*symbol > 0x7F) {
          return 2;
        }
      }
      /* 3-byte UFT8 */
      if (size > 2u &&
          (input[0] & 0xF0) == 0xE0 &&
          (input[1] & 0xC0) == 0x80 &&
          (input[2] & 0xC0) == 0x80) {
        *symbol = (((input[0] & 0x0F) << 12) |
                   ((input[1] & 0x3F) << 6) |
                   (input[2] & 0x3F));
        if (*symbol > 0x7FF) {
          return 3;
        }
      }
      /* 4-byte UFT8 */
      if (size > 3u &&
          (input[0] & 0xF8) == 0xF0 &&
          (input[1] & 0xC0) == 0x80 &&
          (input[2] & 0xC0) == 0x80 &&
          (input[3] & 0xC0) == 0x80) {
        *symbol = (((input[0] & 0x07) << 18) |
                   ((input[1] & 0x3F) << 12) |
                   ((input[2] & 0x3F) << 6) |
                   (input[3] & 0x3F));
        if (*symbol > 0xFFFF && *symbol <= 0x10FFFF) {
          return 4;
        }
      }
      /* Not UTF8, emit a special symbol above the UTF8-code space */
      *symbol = 0x110000 | input[0];
      return 1;
    }

    The change was to remove redundant work in the checks that determine how many bytes of input we are looking at. The idea was to remove the code that rebuilds the "symbol" being read; the only reason the symbol is rebuilt is to check whether it is 1, 2, 3 or 4 bytes in size. We don't actually care what the symbol is, and it isn't used anywhere else except in the function BrotliIsMostlyUTF8(). This is the only place where it is used:

    if (symbol < 0x110000) size_utf8 += bytes_read;
    So, we really don’t care what the symbol is, just the size. If that’s the case we can remove a whole bunch of code, and that should make it faster.
    After my first go at it, it looked like this:

    static size_t BrotliParseAsUTF8(
        int* symbol, const uint8_t* input, size_t size) {
      /* ASCII */
      if ((input[0] & 0x80) == 0) {
        *symbol = input[0];
        if (*symbol > 0) {
          return 1;
        }
      }
      /* 2-byte UTF8 */
      if (size > 1u &&   // u stands for unsigned
          (input[0] & 0xE0) == 0xC0 &&  // 0xE0 = 224, 0xC0 = 192, 0x80 = 128
          (input[1] & 0xC0) == 0x80) {
        *symbol = 0x80;  // 0x3F = 63
        //if (*symbol > 0x7F) { // 0x7F = 127
          return 2;
        //}
      }
      /* 3-byte UFT8 */
      if (size > 2u &&
          (input[0] & 0xF0) == 0xE0 &&  // 0xF0 = 240
          (input[1] & 0xC0) == 0x80 &&
          (input[2] & 0xC0) == 0x80) {
        *symbol = 0x800;
        //if (*symbol > 0x7FF) { // 0x7FF = 2047
          return 3;
        //}
      }
      /* 4-byte UFT8 */
      if (size > 3u &&
          (input[0] & 0xF8) == 0xF0 && // 0xF8 = 248
          (input[1] & 0xC0) == 0x80 &&
          (input[2] & 0xC0) == 0x80 &&
          (input[3] & 0xC0) == 0x80) {
        *symbol = 0x10000;
        //if (*symbol > 0xFFFF && *symbol <= 0x10FFFF) { // 0xFFFF = 65535, 0x10FFFF = 1114111
          return 4;
        //}
      }
      /* Not UTF8, emit a special symbol above the UTF8-code space */
      *symbol = 0x110000 | input[0]; // 0x110000 = 1114112
      return 1;
    }

    Here, I removed the rebuilding of the symbol and instead assigned it a value that would normally be caught by the "if" statement, so the function still returns the corresponding number of bytes.
    Testing results.
    On average, the command took longer to compress with the new code. I was using a best of 3 runs using
    time brotli < ../filename.txt > /dev/null
    brotli results for first pass
    The way my chart works is that it's broken down by the machines I'm working on, the time, and the percentage change in time. A negative percentage is good, as it means we've decreased the time it took to run.
    MissNume.txt, the first file I ran, is a 21MB text file, whereas UTF8Test.txt is a small file with many special characters.
    The first run I did yielded some weird and poor results. With my altered code, both CCharlie and Yaggi slowed down quite a bit. I was wondering if I could cut the code down even more.
    My second iteration was this:


    static size_t BrotliParseAsUTF8(
        int* symbol, const uint8_t* input, size_t size) {
      /* ASCII */
      if ((input[0] & 0x80) == 0) {
        *symbol = input[0];
        if (*symbol > 0) {
          return 1;
        }
      }
      /* 2-byte UTF8 */
      if (size > 1u) {
        *symbol = 0x80;
        return 2;
      }
      /* 3-byte UFT8 */
      if (size > 2u) {
        *symbol = 0x800;
        return 3;
      }
      /* 4-byte UFT8 */
      if (size > 3u) {
        *symbol = 0x10000;
        return 4;
      }
      /* Not UTF8, emit a special symbol above the UTF8-code space */
      *symbol = 0x110000 | input[0];
      return 1;
    }

    I've stripped out the rebuilding of the symbol and removed the checking of the input bytes. These seemed like redundant checks, since we already check the length being passed through.
    To make sure this code worked, I compared both the file size and the contents of the compressed output against the original build, using the diff -s command. They matched in size and were identical, so that was a good sign. I also ran it through their make tests and it passed all 70, another good sign.
    Now for the results.
    brotli results for second pass
    It was better, but the results weren't great. Overall there is an improvement in runtime, but it's nothing major.

    It's odd: the perf benchmarking indicated that this check was a hotspot for the system, yet my performance is almost unchanged.
    One thing I'm curious about is the reading and writing of files. I feel like that takes most of the runtime, and there is not much we can do to improve that.
    My final test is running a really large file on yaggi and seeing how it compresses.
    The file I’m running is a modified version of the UTF8Test file that has been increased in size to 1190MB.
    The reason I’m testing on yaggi is because my poor raspberry pi would actually die if I ran this on there.
    Now we have some results!
    brotli results for large file pass
    After running it 5 times and averaging each, the new compression changes show a significant increase in speed. In terms of real time, it's about a 30-second improvement. Percentage-wise, we decreased our total time by 20%!
    Nice!
    This case yielded some good results. I guess the reason the other runs didn't show such great results was that the files were so small.
    I’m glad that I got some good results from this. The original results were not very great.
    Just to be sure, I ran another check to see if the files differed:

    diff -s UTF8TestNewBrotliVersion.txt.br ../out/UTF8TestLarge1.txt.br
    Files UTF8TestNewBrotliVersion.txt.br and ../out/UTF8TestLarge1.txt.br are identical
    The next step is to talk to the devs at brotli and see if these changes can be merged.
    Link to PR: https://github.com/google/brotli/pull/733

    December 11, 2018 01:27 PM


    Joshua Jadulco

    Release 0.4: PR #2; A blank canvas and some pastels

    Introduction

    For this Pull Request, I worked on an internal open source project developed by my classmates called CreativeCollab – a turn-based creative writing app. In its current form, it's still in a very early alpha stage, as the server isn't properly set up and the Client component is the only functional part of the project so far. The Issue can be found here and the PR that I made can be found here. What I implemented is basically front-end work improving the UI as I see fit (…my design choices are a bit…questionable), as it was too bland for my tastes:

    CreativeCollabOriginal

     The Process

    To improve the UI, I was mainly concerned with the Client component of the project and the CSS files involved. There's only one CSS file in the project as it stands right now, creative-collab.css, so I mainly added my CSS customizations to that file, while making minor in-line CSS styling changes in app.js and board.js to properly accommodate them.

    First stop – the header doesn’t stand out. It needs more flair and some spice. A simple change in the background of the header itself is fine but the font of the header needs to stand out against the normal fonts:

    creative-collab.css

    Collab-header

    app.js

    collab-header_appjschng

    Second stop – the right panel that contains the ‘Player 1’ and ‘Player 2’ text seems like an unknown to the user – why is there a panel there in the first place? What is it for? It needs to immediately tell the user what it is used for and it needs some spice.

    creative-collab.css

    players-container

    board.js

    players-container_boardjschng

    Third stop – there’s a random ‘Disconnected’ and ‘<App-player….>’ text floating around. Hmm…maybe a simple panel that displays ‘Player Information’ is needed? Perhaps just to show their Status and Name:

    creative-collab.css

    playerinfo-panel

    board.js

    playerinfo-panel_boardjschngs

    Fourth stop – what is that "Tell Tale" button design…it reminds me of the 2000s. Nostalgia. Windows of the 90s. Early internet forums. It needs to be modernized. Since the project is using the Bootstrap 4 framework, there's an easy way to implement a modern button design:

    board.js

    button

    The End Result

    CreativeCollabDemo

    by joshj9817 at December 11, 2018 10:54 AM

    Release 0.4 – PR #1; Clockwise

    Introduction

    This PR is a continuation of the work detailed in Release 0.3 – PR #3; Tick,Tock,Work, with the goal of improving the Clock Widget by making it actually update to the current time, while limiting the updates made to the app widget to conserve battery and optimize the widget's performance. The complete work is detailed in the PR that can be seen here. Two new major features are also added: the ability for the user to configure the timezone used to display the time in the Clock Widget, and a widget preview. Therefore, there are three major features added in this PR: updated Clock Widget functionality, Clock Widget configuration, and a Clock Widget preview.

    The Process

    Updated Clock Widget functionality

    Implementing the updated Clock Widget functionality is no easy task in itself – I spent 3 hours on this task alone, the majority of it trying to debug the app. In the end, I used TextClock to create a digital clock UI in the widget that actually runs and doesn't need updating by the widget manager every minute or so. Based on my understanding, Android app widgets are basically 'activities' that are remotely activated and controlled by their respective apps through Broadcast Senders and Receivers. Now, straight to the point – the code snippets added were small but made a huge difference:

    Updated clock_widget.xml file
    clock_widget_changes

    Clock Widget Configuration Screen

    If you've ever used an Android app widget on your screen, chances are you ran into a configuration screen for that widget to set up what it will display once it's been instantiated. That configuration screen is exactly what I managed to implement for the Clock Widget, and it's part of the specs in the Issue where the Clock Widget was requested.

    First, when you want to create an app widget for your app, make sure that you know exactly whether you will need a configuration screen or not. I spent roughly 2 hours trying to figure out how to even display a configuration screen using manual methods, because I didn't think that I would need one. As the saying goes, "Measure twice, cut once" – I should've done that. In the end, I had to save all the code files that make the current app widget work in a separate folder, delete the current Clock Widget-related files, and then recreate the Clock Widget, this time using the Android Studio widget creation screen with the 'Configuration Screen' option checked, so that Android Studio would scaffold the code needed for the Configuration activity, saving the developer the trouble of manually creating the files:

    widget_creation_screen

    Once the configuration screen for the widget has been created via code scaffolding in Android Studio, the highlighted parts in the next images show the snippets of code responsible for making the app widget recognize that a Configuration Screen is needed before a widget instance is instantiated:

    AndroidManifest.xml
    androidmanifest_clockwidgetconfig
    clock_widget_info.xml
    clock_widget_info_config

    Now to the real deal, which actually controls the implementation of the Configuration Screen. The Configuration Screen activity is a special type of activity – it's a temporary activity that saves the data preferences the user has selected for the Clock Widget. Think of it this way – when Amazon receives your order and begins to process it, it sends the information about the items you ordered to the Amazon warehouse or retailer it will retrieve them from. For my Clock Widget, I need to create a config screen that allows the user to pick the timezone that they want the Clock Widget to display the time from. Therefore, a collapsible list of time zone options is needed. The easiest way to implement that is to create a Spinner in the respective XML file, manage that Spinner in the Configuration Screen's Activity file, and load the time zone options – a String array collected using the Android TimeZone API – into an ArrayAdapter so that the Spinner can use it as a resource when displaying the options. The following images demonstrate these steps by highlighting the appropriate snippets of code:

    clock_widget_configure.xml
    clock_widget_config
    ClockWidgetConfigureActivity.java
    clockwidgetconfig_p1

    However, that's still not the end of it – only the UI part of the Configuration Screen has been completed. You need to remember to save the preferences as data so that the app widget will be configured properly. Based on my implementation, I need to save the time zone selected by the user, and since it's a Spinner object, I need to access the value selected in the Spinner, process that value as a 'preference', then store it in an Intent object so that the Configuration Screen activity sends that information to the app widget once the activity is destroyed, and the app widget then retrieves the 'preference':

    ClockWidgetConfigureActivity.java

    clockwidgetconfig_p2

    ClockWidget.java

    clock_widget_prefs

    Clock Widget Preview

    This feature is implemented pretty easily – all you have to do is take a screenshot of the app widget, optimize it properly, upload it into the res/drawable folder of your Android project, and then have the app widget's XML info file use that image as the preview for the widget:

    clock_widget_info.xml
    clock_widget_preview

    The End Result

    worldclockwidgetdemo

    by joshj9817 at December 11, 2018 10:13 AM


    Vincent Wong

    Contributing to 2048 Repo cont'd

    GitHub Repo: 2048

    Previously, when I was working on this repository, it turned out my code format was not consistent with what they had. I noticed that they have a .clang-format file, which means there should be a way for me to run it and auto-format my code. After searching, I found a couple of online formatters, but when I used them they either spat back the same code format or loaded until timing out; either way, I didn't get what I was trying to do. I went back to searching for clang-format tools and came across a Visual Studio Code extension that lets me use the .clang-format file.

    After running clang-format on the source files, errors started to arise from the .clang-format file, which I have posted an issue about. After I tried to fix a few of the errors, the owner of the repo suggested they could be related to the VSCode extension; apparently, they were using npm to run clang-format. Maybe I should have asked how they did the formatting earlier, but then again I did help test other ways of formatting that they had not thought of.

    by Vincent Wong (noreply@blogger.com) at December 11, 2018 05:55 AM


    Ryan Vu

    DPS909 — rc 0.4 — Week 2

    DPS909 — rc 0.4 — Week 2

    Within week two, I made 2 PRs for the internal project GitHub-Dashboard.

    Add LanguageList and TopLanguage

    Language List

    The two components will be used on the main page. For the language list, I used listRepos to get the list of repositories, mapped them into a language array, summed the counts of the unique elements, and then sorted them into a complete language list. The process looks long, but there is currently no support in the GitHub API for getting languages across all repos (it can only list the languages of a specific repo).

    Top language used

    For the top language used, I simply took the most frequent one in the array above, then used devicon to illustrate the language (this library is probably the only one with programming language icons for now).
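
    A rough sketch of the aggregation described above (hypothetical helper names, working on plain repository objects with the language field the GitHub API returns; not necessarily the component's exact code):

    // Count how many repos use each primary language and sort descending.
    function languageStats(repos) {
      const counts = {};
      for (const repo of repos) {
        if (!repo.language) continue; // skip repos with no detected language
        counts[repo.language] = (counts[repo.language] || 0) + 1;
      }
      // Turn the map into a sorted list: [['JavaScript', 12], ['C++', 3], ...]
      return Object.entries(counts).sort((a, b) => b[1] - a[1]);
    }

    // The "top language" is then simply the first entry of that list.
    function topLanguage(repos) {
      const stats = languageStats(repos);
      return stats.length > 0 ? stats[0][0] : null;
    }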

    Add User Info

    User Info Component

    The challenging one was the account age. Calculating the duration from a time in the past as an exact number of years/months/days is quite complicated. Even the most popular library, moment.js, doesn't support it. Luckily I found a moment.js plugin called moment-precise-range which met the need exactly, or else it would have been painful to implement on my own.
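
    For reference, a small usage sketch (assuming the plugin is installed under its npm name, moment-precise-range-plugin, and exposes the preciseDiff helper; the actual component code may differ):

    const moment = require('moment');
    require('moment-precise-range-plugin'); // extends moment with preciseDiff

    // Account age as an exact "X years Y months Z days" style string,
    // measured from a (hypothetical) created_at date up to now.
    const createdAt = '2015-03-20T10:00:00Z';
    const accountAge = moment.preciseDiff(moment(createdAt), moment());
    console.log(accountAge); // e.g. "3 years 8 months 21 days"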

    Link


    DPS909 — rc 0.4 — Week 2 was originally published in pynnl on Medium.

    by pynnl at December 11, 2018 05:48 AM


    Ruihui Yan

    Wrap up

     

    What a journey!

    After about four months of open source hacking, the adventure has come to an end, so now it is time to reflect on the past four months.

    When I first started OSD600, I had little to no experience with open source. My only interactions with open source projects were going to GitHub and downloading free software for personal use. The idea of contributing to those projects never crossed my mind. I always thought I didn't have enough experience to be part of them.

    Then I started this course and everything changed. I was "forced" to work on open source projects, starting small and gradually increasing the level of difficulty. I found this approach very effective because it made me feel empowered to do more and showed me what I was capable of.

    I started with Brackets, doing translation, and now I am working on Firefox bugs. At every step there is something to learn; with every PR I submit, I bring and take away a bit of knowledge. Working on Firefox enabled me to work on this one too.

    When I first started working on PRs, I never thought I would be able to be part of an event like Hacktoberfest. Competitions and events like this used to intimidate me, and now here I am, wearing my Hacktoberfest shirt. I have come a long way, but this is not the end. This course made me realize how important open source is and how embedded it is in my life. We all use open source projects every day, and still we don't give them the credit they deserve. Certainly I haven't.

    Now that I have gotten a taste of being part of this wonderful and welcoming community, I don't want to stop. Even though this is my final semester, I will always have a very special place in my heart for open source.

    Goodbye for now.

     

    #include <stdio.h>

    int main(void) {
        printf("goodbye, world\n");
        return 0;
    }
    
    

    by blacker at December 11, 2018 05:33 AM


    Andriy Yevseytsev

    Release 0.4 - PR3



    As I mentioned in my previous blog post, for the very last PR of the DPS909 course I will stay in the JavaScript algorithms repository (https://github.com/manrajgrover/algorithms-js).

    After researching the repository, I found that the GCD algorithm already in the repo does not filter out the cases where a user enters real numbers instead of integers.

    So, I improved the GCD algorithm. Now it filters out the cases where a user enters real numbers instead of integers. The tests are improved as well. The code is below:
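
    (The embedded snippet is not shown here; what follows is a minimal sketch of the kind of guard described, with hypothetical names, not necessarily the exact code merged into algorithms-js.)

    /**
     * Greatest common divisor with input validation: throws if either
     * argument is not an integer (a sketch of the idea only).
     */
    const gcd = (a, b) => {
      if (!Number.isInteger(a) || !Number.isInteger(b)) {
        throw new TypeError('GCD is only defined for integers');
      }
      a = Math.abs(a);
      b = Math.abs(b);
      // Euclidean algorithm
      while (b !== 0) {
        [a, b] = [b, a % b];
      }
      return a;
    };

    // Example: gcd(12, 18) === 6, while gcd(2.5, 5) throws a TypeError.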


    by Andriy Yevseytsev (noreply@blogger.com) at December 11, 2018 05:32 AM

    Release 0.4 - PR2

    For the second and third pull requests of the last release, I decided to create a couple of algorithms and tests for an open source project on GitHub. In October I found a repo where the owner asked contributors to implement standard algorithms in JavaScript and write tests for them.
    For this PR I took the Leonardo numbers function, commented about it in the issues, and started to work. This function takes n as an input and returns all Leonardo numbers up to n.

    https://en.wikipedia.org/wiki/Leonardo_number

    You can see the code of this function below.
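
    (The embedded snippet is not shown here; the following is a minimal sketch, assuming "up to n" means the first n terms, and may differ from the actual PR.)

    /**
     * Leonardo numbers: L(0) = 1, L(1) = 1, L(k) = L(k-1) + L(k-2) + 1.
     * Returns the first n terms as an array.
     */
    const leonardo = (n) => {
      const result = [];
      let prev = 1;
      let curr = 1;
      for (let i = 0; i < n; i += 1) {
        result.push(prev);
        [prev, curr] = [curr, prev + curr + 1];
      }
      return result;
    };

    // Example: leonardo(6) -> [1, 1, 3, 5, 9, 15]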

    Test for this function:

    I tested my function and created the pull request.
    https://github.com/manrajgrover/algorithms-js/pull/69

    by Andriy Yevseytsev (noreply@blogger.com) at December 11, 2018 05:07 AM


    Ruihui Yan

    Release 0.4 Week 3

    Now, the bittersweet moment.

    The OSD600 course has come to an end. Although it is a relief, it is also kind of sad especially since I had such fun working on all these PRs and Hacktoberfest.

    I have spent the last week working on the Firefox ESLint bug and two other PRs.

    I will be dividing this blog into sections because I have quite a lot to talk about:

    Firefox

    It was quite an experience. Everything was new to me; from the type of bug to the version control tool, everything was different from what I was used to. Because of this lack of experience, I messed up the commit submission: instead of using the recommended moz-phab wrapper, I used Arcanist. Mark told me to abandon that submission and redo it with moz-phab.

    I encountered so many troubles doing so that I decided to redo the whole thing from scratch:

    After running ESLint with --fix, the number of problems went down to 234:

    From here it was all manual fixes. Two hours later, everything was fixed:

    This time around, I used moz-phab submit and I was given two links:

    Now, I am working with Mark to finish up a few details he wanted me to change.

    H2

    For the external PR, as I said last week, I wanted to work on H2, a project that I started working on during Hacktoberfest. My plan was to change the default YouTube and Vimeo video player to Plyr, but after further consideration, I decided it wasn't a good idea because it might not support all the different platforms they might want to support in the future. So instead, I added Dailymotion support, in addition to YouTube and Vimeo (which I also added).

    As a bonus, with my newly acquired experience, I also ESLinted the whole project.

    Internal Project

    Finally, for the internal PR I wasn't sure which direction I wanted to go in. To be honest, none of the projects excited me to work on, so I spent hours testing new ideas and bugs for different projects.

    So I stumbled upon this simple issue, to fix the manifest.json and I took it. It didn’t take me long to finish it so I decided to find another bug to work on.

    Browsing through the projects and PRs, I found that people were having issues after submitting their PRs, where Travis failed to build the PR successfully (here and here). I went into Travis CI to see what was going on, and after a little bit of investigating, I found out the issue happens when it runs npm test and ESLint on the project. So I forked the project and ran the tests locally. Sure enough, there were 258 ESLint complaints in the project, and since I had already worked on this kind of issue with Firefox, I set out to fix them.

    It is funny how something I've learnt recently came in so handy for another project.

    But it wasn't as easy as Firefox, because with the SenecaBlackboardExtension project it was a domino kind of problem: you fix one issue and that leads to another 10. It took a while and a lot of changes to the project.

     

     

    I am so happy to have gotten this far in this course and so grateful for all the help my colleagues and my professor have given me. I hope I have contributed to the open source community in a meaningful way.

    I will gather my thoughts in my final blog post for this course, in which I will reflect on my past and look forward to my future with everything I have learnt. Stay tuned.

    by blacker at December 11, 2018 04:40 AM


    Xuan Dinh Truong

    OSD600-Release0.4-Overall

    In this final blog of the open source course, I want to summarize how I learned about Git and GitHub and how I got involved in open source projects.

    First, here is a list of the 3 PRs that I worked on for release 0.4:

    1. https://github.com/nodejs/node/pull/24584
    2. https://github.com/seiyria/bootstrap-slider/pull/886
    3. https://github.com/ywpark1/portfolio-generator/pull/23

    Second, once you get familiar with Git and GitHub, you will know that they are powerful tools for open source projects. Contributors make their own changes and push their commits to GitHub using Git commands, while a maintainer controls the whole system through those commits and minimizes their work through GitHub. I hope this link can help anyone who is new to Git and GitHub, like me, get a clear understanding of them. It is totally different from what I imagined. I don't really remember how many times I struggled with Git commands in this course. I always make some mistakes, and now I can say I learned something from making them. My professor and my friend (Jeffrey Espiritu) always helped me when I got stuck. Thank you all a lot. Even now, I only "maybe" understand how the whole process works, because there are lots of Git commands that I have not learned yet. In addition, it sometimes takes a lot of time to set up the environment when working on open source projects.

    Third, I got involved in some external and internal open source projects across the 4 releases, such as nodejs, bootstrap-slider, the Minimal-Todo app, SenecaBlackboardExtension, portfolio-generator, and so on. I tried to do different things in each project to learn something new.

    To sum up, this open source course comes down to spending time and having patience.




    by Dinh Truong (noreply@blogger.com) at December 11, 2018 04:15 AM


    Jeffrey Espiritu

    Making a Difference in the World

    (One Commit at a Time)

    Since Hacktoberfest, I've been working on the Bootstrap-Slider project. I chose to work on it for two reasons: (1) because I was planning to use it for another project in another course and (2) my work would count towards my Hacktoberfest contributions for my open source course, OSD600. I am glad I chose to work on this project because I have been able to improve my front-end development skills working with jQuery and CSS, and I've gained tremendous experience helping maintain a large open-source project. But also, unexpectedly, I've been able to make meaningful contributions to a project that many other projects and users actually depend on. I am in a position where I can actually make a difference in this world through open source and give back to the open source community.

    Like I mentioned in my previous post, my involvement in this project would be uncertain after the Fall 2018 semester ends. But I don’t want to leave without having contributed what I feel is enough. So that’s why I wanted to help with the Infamous Issue #689.

    Right now, Bootstrap-Slider does not officially support Bootstrap 4. Many users have asked if and when the project will support Bootstrap 4 in some way. This has been a much-requested feature, as you can see below. But neither of the maintainers uses Bootstrap-Slider in production anymore, so the project is officially in maintenance mode.

    Infamous Issue #689

    Issue #689

    External Projects

    There are 55 projects that depend on Bootstrap-Slider. Bootstrap-Slider hasn't been updated to work with Bootstrap 4, and the effects ripple downstream. It has affected React Bootstrap Slider, which is a ReactJS wrapper for seiyria's Bootstrap Slider component.

    Dependents

    Tooltips no longer function with Bootstrap 4

    react-bootstrap-slider

    Plan for supporting Bootstrap 4

    react-bootstrap-slider

    Another project, Wicket-Bootstrap, has dropped the slider component (which was using Bootstrap-Slider) from its set of Apache Wicket components in its latest version as it was updated to support Bootstrap 4.

    Remove slider page from sample app

    wicket-bootstrap

    Attempt at Bootstrap 4 Compatibility

    There has been an attempt to support both versions 3 and 4 of Bootstrap by user madflow, but efforts have stalled on that front.

    Upgrade to Bootstrap 4

    It's been almost two years since Infamous Issue #689 was opened about Bootstrap 4 compatibility. Then finally, in August this year, David Lesieur stepped in and made a ton of progress, getting so very close to having everything work except for 7 unit tests that were failing.

    Making a Difference

    Now I can make a noticeable difference. I know there are a lot of users who were asking for the feature and I know the source codebase pretty well (50%+ of it) so I thought I could come in and help out. And that’s what I did. Through hours and hours of research and debugging, I was able to track down the bugs causing the 7 unit tests to fail and I was able to fix each and every one of them.

    Now I am making a difference in the world (one commit at a time).

    Feels good, man.

    by Jeffrey Espiritu at December 11, 2018 03:44 AM


    Shawn Mathew

    DPS909 Open Source Review

    It is the end of the semester, and during this semester I have had a completely new learning experience, which tends to happen going into a new semester. But this time it was with a professional option, not a mandatory course for my degree. This new learning experience was open source. When I first registered for open source, I did not exactly know what I was getting myself into. I knew that open source involved a lot of work with GitHub, and my experience with GitHub was not exactly great, so I was looking to improve in that area. Beyond that, I was not exactly sure what else to expect.

    I half-expected to work with languages that I had not worked with before, and I was sort of correct. I did work on a pull request with Ruby, which was completely new for me, but that was the extent of my experience working with a new language. It was mostly working on projects that I had no knowledge of, which threw me off at the beginning. In most programming courses that I have taken, when it comes to assignments we are given an outline of what needs to be completed, and projects are normally started from scratch. For a few courses, a template is given for us to start from, but other than that, not much has been completed. For open source, projects are complete or as close to complete as possible, and it is anyone's job to work on those projects to either fix bugs or make improvements. If there was a project that I was working on, I had to take the time to review and understand what had already been completed in order to fix a bug or make an improvement.

    I think overall, even though this was a new experience, I am glad that I took this course in the end. My goal of understanding GitHub better was definitely met, and this new understanding of open source could be a helpful experience for me in the future. If I end up working in a field that uses open source technology, I can go back to what I learned in this class and apply it in the workplace. I think the most important aspect of the class that I'll take into the future is the collaboration aspect of open source. It's difficult to work on an open source project on my own, which is what I am used to doing when it comes to assignments. For open source projects, collaborating is not only encouraged but needed in order to complete the tasks correctly. By collaborating with others, there is a clear focus on how a project needs to be completed, since there is not a specific set of instructions given from the start, which is what students are usually used to in most programming courses.

    Moving on from open source and this semester, I am not entirely sure what is next. I know for sure that I want to keep my doors open. I know that if I want to continue in open source, I will have to make improvements in order to get better in this area, mainly when it comes to collaborating, because I normally work on my own, which I also prefer. But I did learn during my eight-month work term before this semester that maybe collaboration is not such a bad idea. There were times when I was working on projects given to me by my supervisor and a second opinion on my approach seemed necessary to complete tasks as efficiently as possible. So hopefully in the future I can improve at collaborating with other people on projects. These people can be people that I know, or complete strangers from halfway across the world. Either way, improving at collaboration is required in order to go anywhere, not only in open source but in any other aspect of software development.

    by Shawn at December 11, 2018 03:39 AM

    DPS909 0.4 Release – Blog 3

    Hello again. It's the end of the semester and I'm submitting my last pull requests for my Open Source class. As always, the semester has flown by, and as always, I have to worry about major assignments being due and final exams. Luckily I only have one exam this semester, and it does not seem too difficult. But before I get to studying for my final exam, I'll have to make my final pull requests. Last week I talked about my other major assignments for my Web Services and iOS classes. I was able to finish both, which allowed me to direct my complete attention to my Open Source class. I also talked about having C# fresh in my head after completing my major assignment for Web Services, so I was determined to find an external issue to work on that used C#. For the 0.4 Release, I had to complete one pull request in an external open source project and two pull requests for an internal open source project.

    External Pull Request 1

    External Pull Request 2

    For my pull request in an external open source project, I was not able to find an issue in C#, so I had to find another issue that used other languages. Even though I only needed to complete one pull request for the external open source project, I completed two. For the first external pull request, I added a source code file which displays the convex hull for a preset collection of points. The following is a link to the issue.

    Convex Hull

    Given a set of points in the plane, the convex hull of the set is the smallest convex polygon that contains all of the points.

    ConvexHull

    The following are the steps to find the convex hull (a code sketch of these steps follows the list).

    Let points[0..n-1] be the input array.

    1. Find the bottom-most point by comparing the y coordinates of all points. If two points have the same y value, the point with the smaller x coordinate is chosen. Let the bottom-most point be P0. Put P0 in the first position of the output hull.
    2. Consider the remaining n-1 points and sort them by polar angle in counterclockwise order around points[0]. If the polar angle of two points is the same, put the nearest point first.
    3. After sorting, check whether two or more points have the same angle. If they do, remove all of them except the point farthest from P0. Let the size of the new array be m.
    4. If m is less than 3, return (convex hull not possible).
    5. Create an empty stack 'S' and push points[0], points[1] and points[2] to S.
    6. Process the remaining m-3 points one by one. Do the following for every point 'points[i]':
    7. Keep removing points from the stack while the orientation of the following 3 points is not counterclockwise (i.e. they don't make a left turn): a) the point next to the top of the stack, b) the point at the top of the stack, c) points[i].
    8. Push points[i] to S.
    9. Print the contents of S.
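
    The following is a sketch, in JavaScript, of the steps listed above (a generic Graham scan on points shaped like { x, y }; it is illustrative only and not the exact code from the pull request):

    // Orientation of the ordered triplet (p, q, r):
    // 0 = collinear, 1 = clockwise, 2 = counterclockwise.
    function orientation(p, q, r) {
      const val = (q.y - p.y) * (r.x - q.x) - (q.x - p.x) * (r.y - q.y);
      if (val === 0) return 0;
      return val > 0 ? 1 : 2;
    }

    function distSq(p, q) {
      return (p.x - q.x) ** 2 + (p.y - q.y) ** 2;
    }

    function convexHull(points) {
      if (points.length < 3) return null; // convex hull not possible

      // Step 1: find the bottom-most point (lowest y, then lowest x).
      let min = 0;
      for (let i = 1; i < points.length; i++) {
        if (points[i].y < points[min].y ||
            (points[i].y === points[min].y && points[i].x < points[min].x)) {
          min = i;
        }
      }
      const p0 = points[min];
      const rest = points.filter((_, i) => i !== min);

      // Step 2: sort the remaining points by polar angle around p0
      // (collinear points: nearest first).
      rest.sort((a, b) => {
        const o = orientation(p0, a, b);
        if (o === 0) return distSq(p0, a) - distSq(p0, b);
        return o === 2 ? -1 : 1;
      });

      // Step 3: of any run of points collinear with p0, keep only the farthest.
      const filtered = [];
      for (let i = 0; i < rest.length; i++) {
        while (i < rest.length - 1 && orientation(p0, rest[i], rest[i + 1]) === 0) i++;
        filtered.push(rest[i]);
      }

      // Step 4: too few distinct directions means no polygon.
      if (filtered.length < 2) return null;

      // Steps 5-8: scan, popping anything that does not make a counterclockwise turn.
      const stack = [p0, filtered[0], filtered[1]];
      for (let i = 2; i < filtered.length; i++) {
        while (stack.length > 1 &&
               orientation(stack[stack.length - 2], stack[stack.length - 1], filtered[i]) !== 2) {
          stack.pop();
        }
        stack.push(filtered[i]);
      }
      return stack; // Step 9: the stack holds the hull in counterclockwise order.
    }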

    The following are the graphs containing the example points that were used to find the convex hull. The third image is what is displayed after the convex hull is found.

    A4_PR1

    A4_PR1_3

    A4_PR1_2

    For the second pull request in an external open source project, I was looking through some of my old pull requests and noticed that there was an issue in a repository I had contributed to before, so I decided to make a pull request for that issue.

    Date does not update automatically #1

    External Pull Request 2

    For my pull requests in the internal open source projects, I continued my work on the GitHub-Dashboard. Last time I was able to download the files from a specific repository, so I decided to add a count of the files that were downloaded. The following is the pull request.

    Internal Pull Request 1

    For the next pull request for the internal open source project, I decided to continue working with octokit/rest.js to gather metrics for the dashboard. Using octokit/rest.js I put together some source code to gather the pull requests and issues for a specific GitHub repository. The following is the pull request that I made.

    Internal Pull Request 2
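
    The pull request itself isn't reproduced here, but the metric-gathering code boils down to something like the sketch below. It assumes a recent @octokit/rest release where octokit.pulls.list and octokit.issues.listForRepo exist (the actual pull request may use the older method names), and the owner/repo values are placeholders:

    const { Octokit } = require("@octokit/rest");

    const octokit = new Octokit(); // unauthenticated; pass { auth } for higher rate limits

    // Only the first page (up to 30 items) is counted here; pagination is omitted.
    async function gatherMetrics(owner, repo) {
      // Pull requests for the repository, both open and closed.
      const pulls = await octokit.pulls.list({ owner, repo, state: "all" });

      // GitHub's issues endpoint also returns pull requests, so filter those out.
      const issues = await octokit.issues.listForRepo({ owner, repo, state: "all" });
      const onlyIssues = issues.data.filter(issue => !issue.pull_request);

      return { pullRequests: pulls.data.length, issues: onlyIssues.length };
    }

    gatherMetrics("someowner", "somerepo").then(console.log);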

    Overall, open source was a new experience, mainly because of the collaborative aspect. Normally I have worked on assignments on my own, and that is something I have gotten used to. I realize, though, that the experience I have gained during this course will help if I get into a field that works with open source technology, because I will need to collaborate with others, even if I never see them in person.

    References

    Convex Hull

    octokit/rest.js

    rest.js

    by Shawn at December 11, 2018 03:31 AM


    Julia McGeoghan

    Adding ESLint to Firefox

    For those unfamiliar, ESLint is an open source linter used to ensure a JavaScript codebase follows certain style guidelines and avoids problematic patterns. It's a common tool used by open source projects to help improve code quality and save time. Firefox, a popular open source web browser created by the Mozilla Foundation, has been making use of it since 2015, but given the size of the application there are still some sections that need coverage.

    A Mozilla employee named Mark Banner has written about this initiative to establish ESLint in the remaining unit tests not yet covered in mozilla-central (a Mercurial repo that contains all code required to build Firefox). Recently I was given the opportunity to help him with these changes and decided to go for it.

    Why I wanted to help

    Part of the reason was to get more familiar with ESLint, but there were others as well. As I've learned more about the history of open source, I've had a growing interest in contributing to Mozilla, as it has had a profound impact on the movement. What's more, I've used Firefox a lot over the years, and I've found it genuinely interesting to learn a bit more about its codebase and how it's maintained and run.

    What you should get out of reading this

    If you've only ever contributed to open source through GitHub, you'll find that Mozilla runs Firefox differently from what you're probably used to. I'm hoping this post will help educate others new to the company's tools about their purpose and history, as well as potentially help them if they decide to contribute to Firefox in the future. You'll find that a lot of comparisons between these tools and GitHub/git are made, as these comparisons often helped me understand the 'point' of the tools a bit faster.

    Bugzilla

    Background

    Bugzilla is Mozilla’s primary means of tracking bugs for Firefox. If you read a bit about the application on its about page you’ll find that it was created 10 years before Github started and was one of the first ever products created by Mozilla after it launched in 1998.

    Projects that use Github as their primary means of tracking bugs will have the benefit of using a tool that has mass adoption. However, the benefit of Bugzilla seems to be that Mozilla effectively has full control over the tool itself since they own it. From the ground up it has the means to suit the company’s own use cases and needs. A full team employed by the company can be dedicated to maintaining and building their own bug tracking application.

    Using it for the first time

    This was where my work on this fix began. I needed to choose a particular bug/fix from those listed below:

    The list of ESLint-related bugs that were available

    I opened mozilla-central locally and took a look at the size of the directories listed in each bug. I decided to take a smaller one as I wanted to have as much time as possible understanding how to properly contribute my fixes and set up Firefox from source.

    Claiming a Bug

    I expressed interest in the bug through the page dedicated to it. From there I was assigned to it by Mark.

    Building Firefox from Source

    To work on Firefox’s source code I needed to get a copy of it onto my Windows machine and successfully build it. Following these build instructions I was able to get a working copy without too much hassle.

    The only real problem that I ran into at this stage was that my attempt to clone the code failed the first time around. However running hg clone https://hg.mozilla.org/mozilla-central a second time seemed to get it working.

    Mercurial

    Background

    Prior to working on this project I had only known about git and TortoiseSVN version control systems.

    Mercurial and git started days apart. In fact they started for largely the same reason; the free version of a version control tool called Bitkeeper was being withdrawn and people were looking for a free alternative.

    There are many, many more differences between the two, and I personally found that this Stack Overflow post did a very good job of explaining some of them, as did this one. On a personal note, a near-immediate advantage I found to using Mercurial (a common one that people mention) is that the command line interface made more sense. It felt cleaner and easier to use compared to git.

    Tracking my changes with Mercurial

    My changes to dom/manifest needed to be in two separate commits. The first was straightforward; I needed to remove dom/manifest from the .eslintignore file and run ./mach eslint --fix parser to generate automatic changes. Much like git, I ran hg add and hg commit -m "Message name" to add any updates to staging and commit them.

    The manual changes took more work, but weren't so bad. Every time I changed a file I'd run ./mach eslint --fix [name-of-directory] to test that my changes fixed the existing ESLint errors.
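
    The post doesn't show the individual fixes, but a typical manual change in these ESLint clean-ups looks something like the hypothetical snippet below (the rule name and function are my own illustration, not the actual dom/manifest code):

    // Hypothetical "before": ESLint flags `result` under no-unused-vars,
    // one of the rules that --fix will not correct automatically.
    function summarizeBefore(items) {
      let result = "unused"; // assigned but never read
      return items.length;
    }

    // Hypothetical "after": the manual fix is usually just deleting the line
    // (or actually using the value, if it was meant to be used).
    function summarizeAfter(items) {
      return items.length;
    }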

    Testing my Changes

    Since I was changing a lot of unit tests it was important that I ran them and made sure they were still working. However I had a difficult time determining how to do so. After some searching I found this page, but it still wasn’t very clear to me what I should be doing in this scenario. I basically ended up figuring out that I could test my changes with:

    ./mach mochitest [path-to-directory]

    But even then I later found that:

    ./mach test [path-to-directory]

    Worked as well. My assumption is that the latter is the better command to run since test is a standard command name for running unit tests.

    Submitting to Phabricator

    Background

    Phabricator is an application created and run by a company called Phacility, a company with an ambitious vision and future for its products. Unlike the tools mentioned previously, it had its start more recently, in 2007, when Evan Priestley began writing it during a Facebook hackathon. Over time it became the de facto code review tool for Facebook; then the creator left the company, open sourced the tool, and named it Phabricator. If you want to read more about it, the original project creator has written a post you should read here.

    A screenshot of Phabricator. Its UI looks a bit familiar…

    Working with Phabricator

    To work with Phabricator I needed to follow this guide and create an account. Then I needed to set up Arcanist, a tool that can be used to interact with Phabricator locally on the command line.

    Arcanist is the official command line tool for Phabricator. Without any custom implementation, it is likely the default used by a lot of companies. However, Mozilla has created a custom wrapper around Arcanist called moz-phab, which is the tool the company recommends using instead of Arcanist.

    So when working on Firefox you will track your changes with Mercurial, or hg. When submitting your commits for review and interacting with their review tool, you'll want to use moz-phab.

    Improper Submission

    When I first submitted a review to Phabricator I used Arcanist instead of moz-phab. However this had the effect of combining my commits into a single review, which didn’t suit the reviewing conventions that Mozilla seems to favor. I ended up abandoning this initial differential to make a proper one.

    I had a question about my changes but mistakenly put it in the summary of my differential when submitting via Arcanist. Be sure not to do this; just make a new comment through the window at the bottom of the page after submitting.

    To make it easier for the reviewer to look over my changes, I ended up running moz-phab submit {revision number} the second time around. This automatically separated my commits into separate reviews so I could have them looked at individually.

    However, running that moz-phab command had an unintended consequence; it changed my original commit messages for some reason. Having used git exclusively for pushing changes, I wasn't familiar with this sort of behavior and wasn't sure why it was happening. I ended up fixing my commit messages within Phabricator itself, hoping to make a proper fix with Mercurial nearer the end of my submission.

    Final Changes

    When I was done fixing the comments received in the Phabricator review, I needed to do two last things:

    1. Fix my commit messages
    2. Rebase my changes

    It was at this point that I ran into my most difficult issue to handle. It wasn't a particularly complex one, but it had a higher margin of error than any other I had encountered so far.

    The Problem

    The issue started when I ran hg histedit to change my commit messages. A text file opened and I changed lines like:

    pick {commit number} Bug 447937 - Enable ESLint

    to

    mess {commit number} Bug 1508991 - Enable ESLint

    Then I saved the file, closed it, and ran hg histedit --continue. After that I ran hg log to see if my changes were applied. I was greeted with the following:

    Which basically showed that my latest commits and all my work/changes were no longer being kept track of by Mercurial, or at least not properly.

    Whenever I ran a Mercurial command, whether it was log, status, diff…I always got this error:

    warning: ignoring unknown working parent e48a8ac1884d!

    Which basically stated that my latest (parent) revisions weren't being recognized.

    The Solution

    At the time I didn't have much experience with Mercurial, so I was nervous about making any new changes that could end up breaking things further. Looking on Google and Stack Overflow gave some simpler, surface-level suggestions, but none of those seemed to work. I decided to ask for help from people who were more experienced with this workflow.

    After corresponding with Mark it was decided that I should try working off the default branch, then incorporate my code in my latest differential. I ran the below commands:

    $ hg checkout default
    $ hg pull -u
    $ arc patch --nobranch D13208

    But after running arc patch --nobranch D13208 I ran into the following error:

    There were some changes in .eslintignore that needed to be dealt with properly before my differential’s code could be successfully merged in.

    One option for dealing with this might be equivalent to the following:

    1. create a new branch from default
    2. do a hard reset back to the revision my differential originally worked off of
    3. run `arc patch --nobranch D13208` in this new branch
    4. rename my commits if I still need to
    5. rebase, fix .eslintignore conflicts
    6. merge the code into master

    Almost There

    Even though Mercurial and Git are similar, they aren't identical. For example, branching works differently in Mercurial than in git. From Mercurial's documentation:

    Branches occur if lines of development diverge. The term “branch” may thus refer to a “diverged line of development”. For Mercurial, a “line of development” is a linear sequence of consecutive changesets.

    Some smaller differences include:

    • For Mercurial the master branch is called default.
    • You switch branches by using hg update [branchname] instead of hg checkout [branchname], like you might expect with git.

    I still have this last portion of the contribution to complete. It’s not necessarily difficult, but still a bit time consuming since I need to understand Mercurial better to do it properly. When I have my fixes done I’ll be able to run moz-phab submit, and from there my differential in Phabricator will likely be updated and everything merged into master.

    Conclusion

    I won’t lie, this contribution proved frustrating near the end. But overall I’m very happy that I decided to take this on, since working with these new tools and learning about their history has made me feel like I participated in open source on a more profound level beyond a simple PR on Github. I’m also psyched that I was able to contribute to Firefox itself, given its presence in the industry and what it stands for.

    by Julia McGeoghan at December 11, 2018 03:27 AM


    Stephen Truong

    Release 0.4.3

    For Release 0.4.3 I ended up finding a small project that provides deals to users when they visit certain sites. The issue was that deal notifications would pop up even when the user was on a Facebook Page instead of the actual store website; some stores use a Facebook Page as their storefront instead of their own website.

    To fix this I looked over the project's files and found some code which excluded Wikipedia pages; I copied that code and changed it to work with Facebook.
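
    I haven't reproduced the extension's actual code here, but the idea behind the change is roughly the sketch below; the function and list names are hypothetical, not the real discover-rewards-notifier source:

    // Hostnames where deal notifications should never fire (hypothetical list,
    // modelled on the existing Wikipedia exclusion).
    const EXCLUDED_HOSTS = ["wikipedia.org", "facebook.com"];

    function isExcludedPage(url) {
      const { hostname } = new URL(url);
      // Matches facebook.com as well as www.facebook.com, m.facebook.com, etc.
      return EXCLUDED_HOSTS.some(host =>
        hostname === host || hostname.endsWith("." + host));
    }

    // Only show the deal pop-up when the current page is not excluded.
    if (!isExcludedPage(window.location.href)) {
      // showDealNotification(); // hypothetical call into the extension's notifier
    }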

    Project: https://github.com/nareddyt/discover-rewards-notifier
    Issue: https://github.com/nareddyt/discover-rewards-notifier/issues/86
    Pull Request: https://github.com/nareddyt/discover-rewards-notifier/pull/87

    by Stephen Truong at December 11, 2018 03:19 AM


    Julia McGeoghan

    Contributing to Github-Dashboard

    Miscellaneous lessons learned about React, organizing an open source project, and SVG icons

    In Github-Dashboard our navbar still needed a bit of work. It had some issues with its responsiveness, it was creating odd CSS conflicts across pages, and a new component needed to be added to make it a bit more apparent that the user was logged in. So I decided to update this part of the project and get more experience with flexbox along the way.

    This post will be a collection of some note-worthy things I learned or experienced while working on the project. They’re not in any particular order, they’re simply a collection of things I found notable and potentially useful to others.

    React State Mutations

    Earlier when creating my component I was setting fields onto this.state directly, writing something like:

    this.state.login = info.login;
    this.state.avatar = info.avatar;

    However this is frowned upon. As stated in the React docs:

    Never mutate this.state directly, as calling setState() afterwards may replace the mutation you made. Treat this.state as if it were immutable.

    So basically we want to avoid assigning to this.state directly because it risks unintentional changes to the state we want to keep track of. In reality, mutating this.state directly could appear to work in a lot of cases, but treating it as immutable is a best practice that avoids strange edge cases. If you are looking for some novel ways to get around this issue, you might want to try reading this post.

    I then changed my code to the recommended method:

    this.setState({ ...info });

    If you're wondering what the ellipses are doing, they're a really cool technique I learned about after reading over a Github-Dashboard pull request submitted by pynnl. The spread operator automatically defines and initializes fields in state that match the info object, so this.setState({ ...info }) is essentially equivalent to:

    this.setState({ login: info.login, avatar: info.avatar });

    Organizing an Open Source Project

    Sometime during Hacktoberfest I came across an amazing post where the maintainers of the project tracked the progression of it through screenshots right before/after a new change was merged into master.


    I think this project is exemplary of a certain aspect of open source, in how it was so rapid and whimsical in its development. It shows how quickly a project can evolve over time with multiple people contributing, and how different it can look from each iteration merged into master.

    What they made reminded me of Github-Dashboard; each contributor has a different store of knowledge they’re working on, in turn making contributions a bit unpredictable, which is a good thing. I knew enough to help give the project direction, but when I was reading PRs I was often introduced to better ways of going about things I hadn’t even thought of when writing issues for them.

    A bit about SVG vs Font Icons

    There is still some work to be done for Github-Dashboard until it’s complete, and before that happens there’s a certain improvement I think should be made so the project is stronger.

    Up until recently my only real experience with website icons was with font-based and png/jpeg-based ones, but lately I've been experimenting more with SVG icons. As I get more experience working with them, they really seem like the best option to choose. They:

    • Are abundant online and easy to create with the right tools
    • Can be manipulated directly in whatever HTML they are embedded in and styled with custom CSS
    • Are sharper and better quality than image-based icons

    Inline SVG vs Icon Fonts [CAGEMATCH] | CSS-Tricks

    by Julia McGeoghan at December 11, 2018 03:10 AM


    Adam Kolodko

    Build Firefox

    I don’t think I would have been able to begin working on this project if I wasn’t around other students working on the same thing. The documentation is fine as long as you don’t hit a problem.

    I was able to follow the documentation up until I hit the first snag running ./mach bootstrap, where I ran into an error that I now know was due to Rust not being installed correctly.

    Please choose the version of Firefox you want to build:
    1. Firefox for Desktop Artifact Mode
    2. Firefox for Desktop
    3. Firefox for Android Artifact Mode
    4. Firefox for Android
    Your choice: 1
    Running pip to ensure Mercurial is up-to-date...
    Requirement already up-to-date: Mercurial in c:\mozilla-build\python\lib\site-packages (4.8)
    Your version of Python (2.7.15) is new enough.
    Error running mach:
    
        ['bootstrap']
    
    The error occurred in code that was called by the mach command. This is either
    a bug in the called code itself or in the way that mach is calling it.
    
    You should consider filing a bug for this issue.
    
    If filing a bug, please include the full output of mach, including this error
    message.
    
    The details of the failure are as follows:
    
    CalledProcessError: Command '[u'c:/Users/Adam\\.cargo\\bin\\rustc.exe', u'--version']' returned non-zero exit status 1
    
      File "c:\mozilla-source\mozilla-central\python/mozboot/mozboot/mach_commands.py", line 43, in bootstrap
        bootstrapper.bootstrap()
      File "c:\mozilla-source\mozilla-central\python/mozboot\mozboot\bootstrap.py", line 439, in bootstrap
        self.instance.ensure_rust_modern()
      File "c:\mozilla-source\mozilla-central\python/mozboot\mozboot\base.py", line 624, in ensure_rust_modern
        modern, version = self.is_rust_modern(cargo_bin)
      File "c:\mozilla-source\mozilla-central\python/mozboot\mozboot\base.py", line 584, in is_rust_modern
        our = self._parse_version(rustc)
      File "c:\mozilla-source\mozilla-central\python/mozboot\mozboot\base.py", line 460, in _parse_version
        stderr=subprocess.STDOUT)
      File "c:\mozilla-source\mozilla-central\python/mozboot\mozboot\base.py", line 375, in check_output
        return fn(*args, **kwargs)
      File "c:\mozilla-build\python\lib\subprocess.py", line 223, in check_output
        raise CalledProcessError(retcode, cmd, output=output)

    After some googling of the error messages, I had the idea of trying to run the attempted command manually and following its error messages instead.

    CalledProcessError: Command '[u'c:/Users/Adam\\.cargo\\bin\\rustc.exe', u'--version']' returned non-zero exit status 1
    > c:/Users/Adam\\.cargo\\bin\\rustc.exe
    > error: no default toolchain configured

    Ultimately the solution was to reinstall Rust manually.

    The next snag came when I ran ./mach build; in this case the error was

    0:06.98 ERROR: GetShortPathName returned a long path name: 
    `C:/PROGRA~2/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x86/cl.exe`. 
    Use `fsutil file setshortname' to create a short name for any components of this path that have spaces.

    After some time trying to apply fsutil on C:/PROGRA~2/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x86/cl.exe, I found that "components" in this case meant folders. I don't know if it would be possible to have the error message identify the folder with spaces and suggest an appropriate command, such as fsutil file setshortname "C:/PROGRA~2/Microsoft Visual Studio" MVS.

    At that point the actual fixing began; my task was to enable ESLint on the 'dom/ipc' path in Firefox. The only headache was spending a few hours trying to change a variable that was supposedly already declared in an upper scope. The solution the developers gave was just to disable the warning for that line; I probably should have asked them about it earlier.
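
    For reference, the workaround the developers suggested amounts to an inline suppression comment; the snippet below is only a generic illustration (I am guessing the rule was something like no-shadow, and the variable names are made up, not the actual dom/ipc code):

    let status = "outer";
    function update() {
      // eslint-disable-next-line no-shadow
      let status = "inner"; // re-declares the outer name on purpose, for this line only
      return status;
    }
    console.log(status, update());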

    Once it was all done I had to clean up my commits; I have a habit of committing every little change and hoping it gets squashed during the merge.

    To create a pull request I had to go back into the documentation. I followed the Arcanist install instructions. The warning warning: Watchman unavailable: "watchman" executable not in PATH ([Error 2] The system cannot find the file specified) showed up once or twice; I couldn't solve it, but it didn't seem to stop anything from working.

    I followed them until the end, where I asked a classmate how to actually run a push command with arc, and they told me that I should just use the simpler python C:\Users\Adam\phabricator\moz-phab instead.

    Overall, I liked the experience. I came in knowing that for the first bug fix most of the time would be spent on building the project, just like with almost all the other open source projects I worked on this semester. And I'm coming away from this class hoping I will be able to keep contributing to software like Firefox and VSCode after I graduate.

    by ahkol at December 11, 2018 02:42 AM


    Stephen Truong

    Release 0.4.2

    This is my 2nd PR for Release 0.4; due to the various projects and exams that have been going on the past few weeks, I delayed it until now. For this release I spent quite some time looking for things I could complete with the time available to me, but in the end I failed to find code I truly understood. After looking through the repos that other students submitted for Release 0.4, I ended up finding one where I could just practice a bit of C and commit any type of C program.

    The program I submitted is a calculator that works in a way I find rather odd, but it is the method required for an assignment in the iOS Objective-C course I'm currently taking. I used the same logic as the assignment, except in C. Going over this program gave me a refresher on C in general, as I haven't really touched it in over a year. Despite it taking a while to fully grasp, it works.

    Project: https://github.com/rsenwar/Make-Your-First-Pull-Request
    Issue: https://github.com/rsenwar/Make-Your-First-Pull-Request/issues/56
    Pull Request: https://github.com/rsenwar/Make-Your-First-Pull-Request/pull/125

    by Stephen Truong at December 11, 2018 02:38 AM


    Andriy Yevseytsev

    Release 0.4 - PR1

    For the last PR for the internal project for this course I have chosen to work a little bit more on the repo that I maintained for the last month - Seneca Blackboard Extension.
    As a maintainer of one of the internal projects of release 0.4, I did the following:
    • Cleaning up the issues & communicating with contributors about them
    • Reviewing, accepting & closing pull requests, and communicating with contributors about PRs
    • Some settings & repository cleaning
    As a contributor, for the first PR of release 0.4 I did the following:
    • Connecting Travis CI to the repo
    • Editing the Readme file

    by Andriy Yevseytsev (noreply@blogger.com) at December 11, 2018 02:33 AM


    Shawn Pang

    OSD600 0.4 Reflection

    For this post, I am going to reflect on the pull requests that I made for Release 0.4. Now, while I do not need to write this reflection, I am going to do it because I want to talk about what I did while trying to create tests for apps that run on React, and the huge amount of trouble that I went through just to create such simple pull requests. You can view each of my blog posts on creating the tests here:

    Post 1

    Post 2

    Post 3

    Now, when trying to create these tests, I had no idea what I was getting into. For the most part, I am not great at creating checks for errors within code. I usually just read through the code, try to determine what the problem is based on the error messages, and solve the problem myself. However, I have had very little experience with React apps and creating tests for them, which is why I wanted to take on these pull requests: I would have a reason to learn about testing for errors, and it would help me get over my Achilles’ heel of testing. I never expected how difficult it would be to make even the simplest of tests.

    I spent the most time on the Curated-tv-and-film pull request, as it was the one I started with, and the one I most wanted to figure out how to work on. I skimmed through the code, and after noticing how the toggleFilter event just changes a single state value from false to true and shows the filter bar, I figured that would be the easiest place to start. It seemed simple in concept: just retrieve the state of the showFilters value, run an event once, and check the showFilters value again to see if it had changed. The real problem was trying to find information on how to check for changes to state. I went straight to the official document for testing React apps, but there was nothing I could use there. I then went on to the document on Test Utilities for React, but I still couldn't find anything on testing for state there. After jumping from post to random post, I finally stumbled onto a post that explicitly says in its title that it is about testing for state, but that was after several hours of trying everything that looked like it might work. The post is located here, and it was just a random post that was not connected to the official documents at all.

    Searching for a way to run an event was even more difficult. I could not figure out for the life of me how to emulate running an event. The biggest issue was that the events were not declared as methods within the app, but instead were lambdas that were given an event. So, instead of toggleFilter(), it was toggleFilter = () => {}. Since every test that I could find online was for a method instead of a lambda, I could not find a single thing to help me figure out how to run the lambda. I eventually had to ask the owner how I should go about doing it, and the owner quickly informed me of what to run, which was wrapper.instance().method(). The worst part was that when I pulled the current version of the repository, my simple test that barely had anything in it stopped working, and I was suddenly back at square one after all that work.

    After this, I kept searching to see whether I could figure out what the new issue that arose was, to no avail. After so much time spent trying to figure out errors and getting the tests working, I just gave up and made a test to check whether the HTML div was being rendered correctly. While simple, it is still an important part of the app to check, because if it fails the app will not show anything at all, but it was not even close to as much as I would have liked to do. I really wish there was more documentation on creating tests for React, especially for the problems that occur when attempting to create a test, since there was not much information I could use to determine how to solve my problem.

    The lack of information became the biggest problem when working on the Creative-Collab repository. I couldn't even get shallow rendering working with the app, and had absolutely no idea why. When it turned out that the problem was the React-Quill component, which I could not do anything about, that was one of the worst times, since I had spent so much time trying to find a solution to the problem and it turned out there was nothing I could do. Not mentioning that I planned to create tests for the repository was my fault, as another student went ahead and created tests for the components, but they were not the tests that I was planning to create, so it was more or less fine. The biggest problem was that I had spent so much time trying to figure out how to shallow render that I had very little time left to work on creating normal tests for the repository. Since I had no way of solving the shallow rendering problem, I could not even attempt to create tests the same way I did for the previous repository, and since I could not use mount, I couldn't mount the component correctly either. This forced me to simply use the same tests that I used for Curated-tv-and-film, which bothered me quite a bit since it was not nearly as complex as I would have liked.

    The only part of this release that was not so bad was updating tests for the python exercises. The problem with running the exercises was troublesome, but it was quickly resolved after asking the owner of the repository about it. Once that problem was solved, updating the tests was simple. All I needed to do was look at the previous code, see what had changed, and make the changes necessary to accommodate those changes.

    This release was very dissatisfying for me. After all that work, and several hours of searching online for solutions to the errors I was running into, all I could do was create simple tests and push those to the repositories, since those were all I could get working. I hope that either the documentation on testing React apps will grow in the future, or I will get better at searching for solutions on Google.

    by sppang at December 11, 2018 02:25 AM


    Derrick Leung

    OSD600 Release 0.4 PR#2

    Another internal project I took interest in was GitHub-Dashboard. The idea is nice, in my opinion:

    • track status of things you’ve worked on (Issues, Bugs, Projects)
    • track projects you want to try next, showing you possible good bugs you could do

    and so on.

    Looking at the code, the use of the API confuses me; having worked with React Native previously, putting the GET for GitHub's login API in an href seems odd, especially in regards to retrieving the output. Since the login redirects from the Welcome page to the Home page, the output (code) is put into the header/https link of the home page. Personally, I think it would be better to have a button call an async function, which then calls the GitHub API. The first API returns a code, which must then be used with another API to retrieve the access token. Ideally, only the access token would be passed on to the home page.

    However, the above is easier said than done. For some reason, my async function's try block isn't being reached, or it's still awaiting the fetch of the API. The documentation regarding the use of the API is here: https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps/#web-application-flow

    It also seems that a client_secret is required for the second API call that retrieves the access token, and I have no idea where to get that from. I feel like the documentation could use some actual examples; the idea is good, but for developers who are rather new, such as myself, it can be difficult to navigate.
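
    To make the flow I am describing concrete, here is a rough sketch of the exchange. It follows GitHub's documented web application flow, but the function name, the placeholder credentials, and the idea of doing the exchange on a small backend are my own assumptions, not the current GitHub-Dashboard code (the client_secret comes from the OAuth app's settings page on GitHub and should never be shipped to the browser):

    // Placeholders copied from the registered OAuth app's settings page on GitHub.
    const CLIENT_ID = "<your OAuth app client id>";
    const CLIENT_SECRET = "<your OAuth app client secret>"; // keep this server-side only

    // Step 1 (client): send the user to GitHub's authorize URL; GitHub redirects
    // back with ?code=... in the query string.
    const AUTHORIZE_URL =
      "https://github.com/login/oauth/authorize?client_id=" + CLIENT_ID;

    // Step 2 (ideally server-side, because it needs the client_secret): trade the
    // temporary code for an access token.
    async function exchangeCodeForToken(code) {
      const res = await fetch("https://github.com/login/oauth/access_token", {
        method: "POST",
        headers: { "Accept": "application/json", "Content-Type": "application/json" },
        body: JSON.stringify({ client_id: CLIENT_ID, client_secret: CLIENT_SECRET, code })
      });
      const data = await res.json();
      return data.access_token; // ideally the only thing passed on to the Home page
    }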

    No matter, I’ll return to this later. I will (as always), look for ways to contribute while working at this.

    Something that I noticed in regards to inconsistency was that the navigation bar was only present on the home page; if a user uses the bar, clicks on Features, and then clicks one of the features, it brings them to a page without said navigation bar. The only way back to the home page to look at other potential features is to use the browser's own back button. I think it's best if the web application has control over navigation between its own pages, rather than relying on the browser, which could have unintended behavior.

    Before:

    githubdashboardbefore.png

    After:

    githubdashboardafter

    I applied this change across the relevant pages in the project, and made a pull request.

    PR: https://github.com/deepanjali19/GitHub-Dashboard/pull/41

    Technology/Language: Javascript, HTML

     

    by derrickwhleung at December 11, 2018 01:33 AM


    Shawn Pang

    OSD600 0.4 Pull Requests Post 3

    For this release, I plan on creating tests for the Curated-tv-and-film repository. I plan on dealing with issue #20, which simply asks for new tests to be implemented for each component. This is the next step from my previous blog post on testing, and there is more information on what I am planning to do there.

    To start off, I looked into the tests that already exist within the repository and tried to determine what possible tests I could add. Looking at each component's tests, they are currently made up of a test that checks whether the app can still render, and a test that checks whether the code matches a snapshot. Looking at some websites that have tutorials on writing tests for React apps, I am trying to test the methods that already exist within the app, but I need to figure out how to check the value that each method changes.

    To start with, I began with a simple test to see whether the toggleFilter method was doing its job and changing the showFilters state from false to true when triggered. It seemed like the ideal place to start, since it would be the first of many other method tests and would be a basic outline for any test to come after it. What I was not expecting was how difficult it would be to figure out how to create a test. I began by searching online for a good reference to use when attempting to implement a test for the method. I found information on the matter, but very little of it resembled the code within the app that I was planning on working on. There was a blog that I managed to dig up on the matter here, and it gave enough information for me to begin work on testing the state, but the time needed just to find that one page was impossibly long. I spent nearly 2 days on this because I could not find any information on how to test state anywhere within the official Jest testing documents. I was hoping to find it within the official page for testing React apps with Jest, but could not find a single thing on how to test for a change in state. The only thing I could find was a test for text, and I had absolutely no use for something like that for the tests I was planning. All I could do was keep searching until I found something that actually tested an app similar to what the current app looked like.

    So, after all that searching, I finally have a reference point to use in terms of testing for state. The current code looked like this:

    it('testing for changed toggle state', () => {
      const wrapper = mount(<App />);

      expect(wrapper.state().showFilters).toEqual(false);
    });
    

    It was simple, easy to understand, and most importantly, it ACTUALLY WORKED. It was very difficult just trying to figure out what to use to retrieve the app's state; just getting the state was an ordeal in and of itself. I tried to use wrapper.state.showFilters, expecting it to return the state, but it kept giving the value undefined whenever I attempted to run it. I then went on to testing for props to see whether I happened to be testing for the wrong thing, but that didn't get me anywhere either. I found an example of a tic-tac-toe app that created tests for buttons, but since the component was not a button and the value I needed was the state within, that information did little to help me as well. I then began to look into the shallow wrapper API to determine whether there is a specific method to retrieve the state, but all I could figure out was how to find a component; and while there was a method to find the state, I could not figure out how to get it working in any way, as it states that you need to pass the key into the method as an argument, but whenever I attempted that I would always get undefined as the value.

    Now that I had managed to get the state, I went on to figuring out how to run the event that changes the state from false to true. Trying to figure this out was even worse than trying to figure out how to retrieve the state of the app itself. I searched the web some more, and found several pages on testing, such as one that talks about the React Testing Library, information on the React test utilities, and a very detailed document on how to test for components within the app, but still nothing on how to test events. The most I could find was the simulate() method, which could be used with buttons to simulate them being clicked, but I could not figure out for the life of me how to use it with the events called within the app, as the events were not triggered simply by a button click; they were using dropdown lists, and I couldn't find any information on how to use simulate with a dropdown. I finally gave up and asked the owner of the repository whether they knew how to run the events.

    The owner informed me that I would need to use the instance() method along with the method that I wanted to run, and it would look like the following: wrapper.instance().method();. After all this time, I finally had something to work with. I modified my code to run the toggleFilter event, and my code looked like this:

    it('testing for changed toggle state', () => {
      const wrapper = mount(<App />);

      expect(wrapper.state().showFilters).toEqual(false);

      wrapper.instance().toggleFilter();

      expect(wrapper.state().showFilters).toEqual(true);
    });
    

    I ran the code, and I finally got it working! After all this time, I was finally getting somewhere! After the test confirmed that the toggle worked correctly, I decided to update my repository to the current version of the original, as just getting this far had taken so much time that my version had fallen quite far behind. So, I ran git fetch upstream and git pull upstream master, and re-ran the tests to see whether they would still work with the new changes.

    And…nope. The simple test that I had implemented into the repository just to check whether the state was being changed when an event was called was no longer working. Suddenly, I was receiving the error TypeError: Cannot read property 'showFilters' of null and my test was failing to run. This was especially infuriating since the test was so simple that I was not expecting anything to go wrong. I informed the owner of the repository of the error and began looking into what could be causing it.

    For now, I used one of the pages of information that I found on React testing to write a test that checks whether the app renders a div properly. It looked like the following:

    it('always renders a div', () => {
      const wrapper = mount(<App />);
      const divs = wrapper.find('div');
      expect(divs.length).toBeGreaterThan(0);
    });
    

    I was not able to figure out how to deal with the problem of checking for state. It is most likely a change to how the app handles its state; the changes that the owner of the repository made probably meant the state could no longer be retrieved in the normal way, but I have no real way of knowing what would have that effect, other than carefully analyzing every change the owner has made since my previous version, which would take too much time. I will have to go with just the test for a div and leave it at that, in pull request #208.

    Now, I will be looking into creating tests for the Creative-Collab repository, helping with issue #22. I was still not able to figure out how to solve the problem of running the tests with shallow rendering, but another student wanted to assist in creating tests for the repository, so I asked him to help look into the rendering problem. He said that it is most likely a problem with the React-Quill component, and the only way to deal with it is to remove that component. Unfortunately, because that component is crucial to the app, removing it is not an option, so I could not do anything about shallow rendering. I attempted to use mount instead of shallow, but when testing with that, I received the error ReferenceError: MutationObserver is not defined and the tests failed. I used the code that I had for testing the other repository and used it for testing within this repository.

    it('always renders a div', () => {
      const wrapper = mount(<App />);
      const divs = wrapper.find('div');
      expect(divs.length).toBeGreaterThan(0);
    });
    

    While I was working on determining possible tests for the Creative-Collab repository, another student came along and created tests for both the StoryBoard and Board components before I could manage to. I forgot to mention on the issue that I planned on working on creating tests! I had already spent so much time trying to determine how to create tests for the Curated-tv-and-film repository that I did not have any more time to start working on more complicated tests for Creative-Collab. So, I just decided to commit the test for divs into the repository and finish it up, with pull request #37.

    I'm a bit bummed about these pull requests. I kept trying to create tests for these repositories, but problems kept popping up, whether it was not being able to run a function that I should have been able to, or the tests not coming out the way that I would have liked. I will go into it in my reflection post, as I have a lot I would like to talk about in terms of creating tests. For now, I'll just say that you should really try to understand how an app works before going into how to test it.

    by sppang at December 11, 2018 01:21 AM


    Derrick Leung

    OSD600 Release 0.4 PR #1

    One of the internal Projects I took a look at was DarkChatter. The idea for this project was interesting to me:

    • chat not hosted by a server
    • no sign up – anonymous chatting, so no worry about things such as data-mining (big companies like Facebook run into scandals regarding this topic)
    • no data connection is needed – users don't have to worry about going over their data usage

    As with any project, I forked the repository and cloned it to my local machine.

    I also took the latest pull request, which laid out the bare-bones features for the application – it was not merged yet as each pull request for this repository requires at least one review (a solid practice).

    (I manually copy pasted the changes to test said files; there should be a better way to test pull requests, and I will look into finding it).

    DarkChatter

    Definitely bare-bones features; however, I think due to the nature and style of this application, it’s acceptable to leave it for now.

    Testing the buttons seemingly did nothing; this is something I think I’d want to fix, as there is no current user-side response to let the user know if something has succeeded or not.

    I added pop-up messages to the functions corresponding to Register, Connect, and Send buttons when the functions fail – I left the message simple and without too much jargon as the average user will not understand, and doesn’t need to know.

    An example of what I’m talking about is shown here, with the changes bolded:

    public void clickConnect(View v) {
        NsdServiceInfo service = mNsdHelper.getChosenServiceInfo();
        if (service != null) {
            Log.d(TAG, "Connecting.");
            mConnection.connectToServer(service.getHost(),
                    service.getPort());
        } else {
            //User-side warning when connect fails
            Snackbar connectError = Snackbar.make(v, "Unable to connect! There is a problem with the service.", Snackbar.LENGTH_SHORT);
            connectError.show();
            Log.d(TAG, "No service to connect to!");
        }
    }

    DarkChatterMessageFailure

    And so the user will only get popups when their application isn’t working as intended.

    (Also, I sanitized the name of the app appearing on the icon on the phone/emulator to DarkChatter; it doesn't make sense to leave it as the default.)

    PR: https://github.com/DarkChatter/DarkChatter_Android/pull/9

    Technology/Languages: Android Studio, Java

     

    by derrickwhleung at December 11, 2018 01:12 AM


    Shawn Pang

    OSD600 Release 0.4 Post 2

    For this pull request, I will attempt to add tests to the exercism repository and update tests for several of the exercises that already exist. Since several tests already exist and the changes I plan on making are just updates to them, I plan on doing multiple issues at once to make up for the simplicity of each individual issue.

    Now, the issues that I currently plan on dealing with are #1623 and #1621

    So first, I forked the repository and cloned the fork onto my PC. It seemed that I would need to use Python for this repository, so I checked the Python installation I had previously set up using python -V; I had Python 3.7.1 installed, so I went on to see whether I could run the tests. I then received the error FileNotFoundError: [WinError 2] The system cannot find the file specified and my progress was again at a standstill. I asked the owner about it on issue #1624, and he informed me that the Python test runner was written with Unix systems in mind, and that on Windows some of the code that runs the scripts needed changes in order to get it running. He then created an issue for making the repository cross-platform and made the changes necessary to have it work on a Windows system. I then pulled the changes into my fork and tested again to see if they would work. The tests worked fine, and so I began work on the tests.

    Looking through the tests, it is not entirely clear what is necessary to bring them up to the current version. For each issue, the contributor states that a certain test suite is out of date and needs to be updated to the latest canonical data. For some tests it is clear that a test or two from the data is missing, but for others the differences are not so obvious. I decided to start with the more obvious tests and to ask the owner of the issues about the less obvious changes. After looking through each test, I decided to start by updating the tests for binary-search. I compared the data against the tests, and used the previous commit to the test file as a reference to see what the last version had; the only difference was that it was missing a test at the end. Using the previous tests as an outline, I created the test:

    def test_bounds_cross(self):
        with self.assertRaisesWithMessage(ValueError):
            binary_search([1, 2], 0)
    

    I committed and pushed the test for review, created pull request #1631 against the original repository, and went on to work on the next test. Next, I decided to work on the tests for the hamming file. The data for this file had several redundant entries removed, so I simply removed the tests that checked for the previous data, pushed those changes, and created the pull request for them, pull request #1632.

    Since I was spending so much time creating tests for my other repositories, I decided not to spend too much time on this external pull request. Even then, I still had a bit of trouble just trying to run this repository, since it was not Windows-compatible. I am glad that I helped the owner of the repository learn that his tests were not Windows-compatible, and helped him figure out how to make his repository work on Windows.

    by sppang at December 11, 2018 01:10 AM

    December 10, 2018


    Hojung An

    Final Project - Stage 3 - Wrap Up

    So, in the previous post we saw an average of 2% improvement building with -Ofast -march -mtune compiler options.

    To see if it could be improved further I looked at the code, hoping to find that one place where I could optimize.

    Long story short, I was not able to optimize.

    This is what happened:

    In the beginning, when I decided to take bzip2 for the project, I read its introduction on the bzip2 website.
    There it is mentioned that bzip2 was built using the Burrows-Wheeler Transform and Huffman coding to create lossless compression.

    I had no idea what these algorithms were about or how they work, so I Googled them in hopes of finding some explanation in plain English.
    I came across two videos on YouTube (BWT, Huffman).
    After watching these videos it is pretty clear that there is no way I can improve these algorithms. They have been battle tested and have proved to work very well, so well that BWT is used for human DNA sequencing.

    Then that's it, there is nothing I can do. Except, as mentioned in the Huffman video, the block size can affect the performance.

    So, my quest started with looking for where the block size is defined. Strangely, it's not globally defined in bzip2; Ctrl+Shift+F in VS Code didn't give me anything.
    Let's look at the manual (bzip2 was nice enough to include one) and see if we can find something about where and how the block size is defined.
    After reading through the manual and the code, it looks like the block size is defined as 100000 * blockSize100k.
    blockSize100k is a variable holding a number from 1 to 9.
    bzip2 has a flag where a user can specify the block size from 100,000 to 900,000 by using -1, -2, ..., -9, or --fast and --best, where --fast is 100,000 and --best is 900,000.

    So, in theory if I change this 100,000 to a larger number it should work better.
    Time to test the theory.
    bzlib.c
    Before:

    After:

    Here I changed the block size from 100,000 to 1,000,000. In theory this should improve the Huffman tree, and therefore the compression.
    Test Result:
    hmm...
    This error looks like some kind of memory-size bug triggered by changing the block size from 100,000 to 1 million. Maybe 1 million was too big, so I tried 200,000, but that also displayed the same error output.
    I'm starting to think that it's not as easy as just changing that 100,000 to whatever value.
    I spent hours and hours trying to find what in the code would cause this error. My thoughts were:
    1) somewhere in the program it looks at the size of the block
    2) based on the size of the block, the object that holds compressed symbols changes, or
    3) the conversion table size is changed, or
    4) size of a buffer is changed

    After numerous hours I was not able to find the code/function that would take care of a larger block size value.
    It was very disappointing not to be able to figure this out. I really wanted to test my theory, but the code was just too complex for me to tinker with. This almost makes me want to go to university and study Computer Science. Almost.

    Since I failed to test my theory, I thought I would have some fun and find a piece of code that I knew I could write differently.

    I found some pieces of code in compress.c that I think could be re-written.
    Line 197
    Before
    Looking at this code, I thought:
    1) it's redundant to have rtmp2 re-initialized every iteration
    2) ryy_j++ could probably be part of the while condition
    If these two theories make sense and the changes do not change the value after the while loop, then SIMD could probably be used for the rtmp2 = rtmp; rtmp = *ryy_j; *ryy_j = rtmp2; swap, since the definition of SIMD is Single Instruction Multiple Data.
    So the code looks like this after

    Line 370
    Before
    To be honest, I have no idea what's going on in this code.
    Looking at the code, though, since the parameter value is a sequence from 0 to 49, would it make any difference if this was put in a for loop?
    Let's find out.
    This is the code after:


    Line 415
    Before
    Again, no idea what's going on, but the pattern is the same as the code on Line 370.
    I'll also put this into a for loop and see if anything changes.
    This is the code after:


    The Build
    To test the changed code I re-compiled the program with the -Ofast -march -mtune options.
    The program compiled without any errors, but this warning message was shown.
    Hopefully it still works.
    Only one way to find out: let's compress the files.
    Fingers crossed....
    .
    .
    .
    .
    .
    .
    Test Results
    ?????????????????????
    ????????????????

    Ok, looks like something worked for the better.

    hmm........
    ..........
    ......
    ...
    I wish I could explain this result, I really do.
    I wasn't expecting anything, so it's surprising to see such a difference.
    BTW, the difference is calculated using the original value.
    I'm going to take a wild blind guess that it's SIMD at work in the while loop.

    Now, the true test: can a file be compressed and decompressed without any errors?
    When it was optimized with just the compiler options, files could be compressed and decompressed without any problem. That makes sense, since no code was touched; if it hadn't worked, that would have been weird.
    But now that the code has changed, even though the changes I made are not significant, it's probably a good idea to test it.

    Here is how I'll be testing this.
    1) compress each file (random.txt, image.jpg, final.docx) with a different name
    2) decompress compressed file
    3) use diff to check

    aarchie
    Test 1 : random.txt

    Test 2 : image.jpg

    Test 3 : final.docx

    xerxes
    Test 1 : random.txt

    Test 2 : image.jpg

    Test 3 : final.docx


    So it looks like everything is good; it's compressing and decompressing without the content changing.
    The program works as it should with the edited code on both aarch64 and x86_64 machines.



    Conclusion
    To sum up, I was able to see some improvement in performance.
    As mentioned earlier, it's disappointing not to have been able to test with the block size.
    The program was successfully, though not by much, optimized with the compiler options
    (-Ofast -march -mtune).
    It was really unexpected to see such an improvement with the edited code.
    The improvement was about double that of -Ofast -march -mtune alone.

    In the end I was able to reach an average of 7% improvement on the aarch64 machine and an average of 6% on the x86_64 machine.

    Even if the edited version of the code is not a legitimate optimization, the compiler options alone were able to produce an average 2% improvement on both architectures.

    With this I conclude my project.
    It was a journey working on this project, and quite an interesting one too.

    by Hojung An (noreply@blogger.com) at December 10, 2018 11:47 PM


    Volodymyr Klymenko

    Testing React Components in Creative-Collab with Jest and Enzyme

    For my next contribution to the Creative-Collab app, I chose an issue that asks for some unit tests to be added to the project. One simple test had already been created, which checks if the app renders without crashing. I wanted to add some tests for the React components on the client side of the project.

    Testing tools

    There are a couple of tools you can use for testing your React app:

    Testing - React

    To set up the Creative-Collab React app, we used create-react-app. This tool ships with Jest, which is a JavaScript testing framework developed by Facebook, so there was no reason to switch to any other testing framework.

    Also, we installed a testing utility developed by Airbnb called Enzyme.

    Enzyme is a JavaScript Testing utility for React that makes it easier to assert, manipulate, and traverse your React Components’ output. (from Enzyme docs)

    Enzyme usage

    There are three ways to render components using Enzyme:

    • Shallow Rendering
    Shallow rendering is useful to constrain yourself to testing a component as a unit, and to ensure that your tests aren’t indirectly asserting on behavior of child components. (from Enzyme docs)
    • Full DOM Rendering
    Full DOM rendering is ideal for use cases where you have components that may interact with DOM APIs or need to test components that are wrapped in higher order components. (from Enzyme docs)
    • Static Rendered Mockup
    enzyme’s render function is used to render react components to static HTML and analyze the resulting HTML structure. (from Enzyme docs)

    One of Creative-Collab’s contributors was trying to write a test for rendering the whole application without crashing using Full DOM Rendering, however, the test failed for some reason. At the same time, shallow rendering of the same test worked well. I did some investigation, and I found the reason why the test couldn’t pass using Full DOM rendering. In our app, we use an external component called React-Quill, which is a text editor. As I found, the test failed at rendering Full DOM for this component, and it’s quite bad. On the one hand, you have a ready-to-use component (it’s also quite complicated, and you cannot rewrite it by yourself in a short time). On the other hand, it fails your tests, which is not good.

    I filed an issue in the react-quill repository in order to try to find the reason why testing doesn’t work.

    I wrote tests for two components: StoryBoard and Board.
    StoryBoard is basically a container for the text. So I just added a test to render it using shallow.

    Board component is basically a page that contains a game screen. It has a list of players, StoryBoard, text editor, and a send button. I wrote four tests for this component:

    • it renders without crashing
    • it has a list of players
    • it has a story board
    • it updates state correctly on clicking the button

    Because this component uses react-quill, I couldn't use Full DOM rendering where I would have liked to, so I used shallow rendering.
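
    To give a rough idea of the shape of these tests, here is a minimal sketch of a shallow-rendered Jest + Enzyme test. The import path and the players prop are placeholders I made up for illustration, not the actual Creative-Collab code:

    import React from 'react';
    import { shallow } from 'enzyme';
    import Board from '../components/Board'; // hypothetical import path

    // assumes the Enzyme adapter is already configured (e.g. in src/setupTests.js)
    it('renders without crashing', () => {
      // shallow() renders Board only one level deep, so react-quill is never mounted
      const wrapper = shallow(<Board players={['Alice', 'Bob']} />);
      expect(wrapper.exists()).toBe(true);
    });

    it('has a story board', () => {
      const wrapper = shallow(<Board players={['Alice', 'Bob']} />);
      // with shallow rendering, find() can match a child component by its name
      expect(wrapper.find('StoryBoard').length).toBe(1);
    });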

    You can see the tests I wrote in my pull request.

    by Volodymyr Klymenko at December 10, 2018 11:44 PM


    Thanh Nguyen

    Final Blog Post for OSD600

    This blog post documents my experience going through Open Source Development (OSD600) course at Seneca College.

    Throughout the semester, I’ve learned a lot about open source development and collaboration.

    I’ve learnt how to collaborate with other developers through many different tools, especially Slack (collaborating chat tool), GitHub (web-based code hosting service), and Git (version control system). In addition, I’ve been introduced to new concepts regarding open source development through many different excellent sources on OSD600 wiki, and having a wonderful professor like David Humphrey for guidance.

    Through working on open source projects, I've had to deal with many different ideas and concepts (e.g. languages, technologies, …) that I had never seen before. It was challenging at first, but after I got through them, it was satisfying and gave me more incentive and confidence to participate in more open source work.

    Overall, I think this is a great course that introduces valuable information and practices that will get me ready for the workforce after I graduate.

    by Thanh Nguyen at December 10, 2018 10:49 PM

    OSD600 Release 0.4 Part 3

    This is the final blog post for release 0.4 in OSD600. For the final external Pull Request, I decided to continue working on and improving upon ESLint issues.

    This time, instead of creating an ESLint configuration file, I had to fix ESLint errors. The repo that I was working on is exercism/javascript; basically, this repo contains exercises written in JavaScript.

    The ESLint configuration was created in package.json, but the exercises contain lots of ESLint errors that haven't been fixed yet, which is why they were added to the .eslintignore file.

    So for my PR, I just had to pick one or more exercises from the repo and fix the eslint errors in those exercises.

    Here are the steps that I took to complete the PR:

    1. I asked to work on some of the exercises on this issue.
    2. Then I forked and cloned the repo to my computer.
    3. Created a new branch called issue-480.
    4. Looked up the exercises in the .eslintignore file (these still have unfixed ESLint errors), chose 3 of them, and removed them from the list.
    5. Went to each individual exercise's folder and ran npm install.
    6. The guide for this issue states that I can run npm run eslint to see the ESLint errors and then fix them, but since I'm coding in Visual Studio Code, I can already see ESLint errors in the Problems panel without running the command.
    7. Fixed the errors either by adding a single-line comment like // eslint-disable-next-line rule1, rule2 or by fixing the actual code.
    8. After fixing all the ESLint errors, I made a commit, pushed it to origin, and made a PR.

    Here are some examples of the ESLint error fixes that I did:

    Screenshot (66).png

    For the above error, I just had to add a comment specifying that some ESLint rules should be disabled for the next line:

    // eslint-disable-next-line no-bitwise, no-restricted-properties

    Screenshot (65).png

    The object destructuring error was a bit more complicated. I had to do a Google search and came upon this answer on Stack Overflow; all I had to do was change line 17 to const { value } = this.head;
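
    In other words, the fix was roughly this (a simplified sketch, not the exact exercise code):

    // before: flagged by ESLint's prefer-destructuring rule
    const value = this.head.value;

    // after: use object destructuring instead
    const { value } = this.head;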

    And that’s it for this blog post. Thanks for reading!

    by Thanh Nguyen at December 10, 2018 10:25 PM


    Aleksei Kozachenko

    My Open Source experience in 2018

    In this post I just want to sum up and list the projects I contributed to in 2018.
    I did not have a lot of experience (almost none) with open source before this year; however, I was familiar with git and had very little experience with GitHub.

    GitHub has a nice “Activity view” where you can see all the repositories you contributed to and what kinds of activities you did throughout a period of time.

    Your GitHub profile also has a historical diagram that represents your commit history day by day during the last year.

    GitHub Activity view

    So in 2018 I contributed to 8 repositories. Most of my contributions were made to the following projects:

    1. SwifterSwift — a collection of over 500 native Swift extensions.
    2. NodeChat — A group chat application written in Node and SocketIO.

    It can be seen from the graph above that 67 of my 68 contributions were made in September-December 2018, when I was in school. This is approximately 13.5 times more contributions than I made the year before!

    This view shows that 53% of my activities were pull requests, making them my major GitHub activity, followed by commits (30%).

    Unfortunately, GitHub does not show which languages were used the most; however, I can tell that the majority of my contributions were done using Swift and JavaScript.

    Other projects I worked on were:

    1. Filer — a POSIX-like file system interface for node.js and browser-based JavaScript.
    2. IINA — a modern video player for macOS.
    3. Charts — a charting framework for iOS.

    I don’t know know if I will be able to make 13 times more contributions in 2019 than I did in 2018, but I am pretty sure that I will try outnumber 2018 numbers!

    by Alexei Kozachenko at December 10, 2018 09:54 PM


    Yuecheng Wu

    Release 0.4 – Learning a New Language

    Hi everyone, I am back with the last post of Release 0.4. The semester is almost over, and I have enjoyed working in the Open Source environment so much this whole semester. I will talk more about this in another post. Stay tuned.

    For the last internal pull request, I decided to challenge myself a little and finish strong. Since I had so much fun learning Python during the Hacktoberfest, I decided that I will learn another new language this time. Also, one of the internal projects that I am interested in called “Portfolio Generator” uses C# language. Therefore, I decided to learn C# and add a unit test for the project (#26).  

    I started by looking into what a unit test is: a program that tests the code to maintain code health, ensure code coverage, and find errors and faults before customers do (Source: Getting started with unit testing). Then, I went through one of Microsoft's documents on how to create and run unit tests for managed code. It taught me some basic syntax for writing unit tests. After that, I looked into xUnit.net because that's what was suggested in the issue. I found this very helpful documentation: Getting started with xUnit.net, which helped me get started. 

    Since I am still new to the C# language, I decided to write the unit test for the DeleteFiles() function in HtmlGenerator.cs. It looks like this:

    using System;
    using System.IO;
    using Xunit;
    using Portfolio_generator_console;
    
    namespace UnitTests
    {
        public class PortfolioGeneratorTests
        {
            [Fact]
            public void TestDeleteFiles()
            {
                string templateDir = Path.Combine (Directory.GetCurrentDirectory ().ToString (), "templates");
                string[] files = Directory.GetDirectories (templateDir);
                
                string expected = null; // expected result after the deletion
                Console.Write ("Please enter the file you want to delete: ");
                var fileName = Console.ReadLine ();
                var path = Path.Combine (templateDir, fileName); // full path of the file to delete
                
                HtmlGenerator.DeleteFiles (path);
                string actual = path;
                Assert.Equal (expected, actual); // compares the expected result with the actual result
            }
        }
    }

    This is still the trial version and probably needs modifications afterwards before getting merged. So, more work will probably be needed later on. Here is the Pull Request: Unit test for DeleteFiles function #29.

    Learning a new programming language is always fun, and thanks to Open Source, we have gotten the perfect opportunity to learn, practice, and improve. 

    by ywu194 at December 10, 2018 09:33 PM


    Steven Le

    Release 0.4 Looking Back

    Looking back, this course overall has brought a lot of personal growth. I would like to sum up each release with one standout sentence.

    The first release was a lot of swallowing my pride and asking for help from the kind individuals in my class.

    The second release was dictated by poor planning and a lack of discipline, which is something I am working to improve.

    The third release was a time of research: finding ways to effectively look for information and asking for help.

    The fourth and last release is the most important: handling personal health. Given that I have grown (even just a little) over the semester, that progress would be stunted if I did not keep myself healthy.

    Overall this may be short, but I think that, if anything, it helped me narrow down what I got from the course and what I want to keep working on.

    by Steven Le at December 10, 2018 09:26 PM

    Release 0.4 Week 3

    Oh my this week is a doozy.

    Having said what I needed to say so far, I don't think there needs to be much more said. I finished the pull requests! (albeit with barely any time to spare). But before that I do need to address some concerns I've had over the last release, motivation issues being the main one.

    Some background: I have a hyperactive thyroid and have been getting treated over the past 2 years. For the uninitiated, the thyroid regulates hormones in your body. This is a problem because when it is overactive your body is essentially running above pace, for example at 1.5x speed. With the treatments I have been getting, the goal was to make it underactive so that I could regulate it with medication. Needless to say, given the rut I was in, I made a decision and got myself checked out again at a walk-in clinic. It was time to see what was wrong, because I've been psyched out this month in particular. I was advised to take more of the medication I have been using to regulate the thyroid. It worked, and I feel more able to do work and study (great timing in particular, coming into the exam period at my school).

    Let's move on to my pull requests:

    https://github.com/SyamSundarKirubakaran/android-jetpack/pull/20

    https://github.com/SyamSundarKirubakaran/android-jetpack/pull/19

    I whacked out 2 pull requests to supplement my downturn last week. One of them wasn’t the greatest because of crunch time but I feel content overall due to the situation I was in.

    https://github.com/SyamSundarKirubakaran/android-jetpack/pull/19

    This pull request went through and did some housecleaning. I had felt that my previous Pull Requests were alright but the pictures and links were not consistent. I wanted to open a pull request addressing and fixing them without having the baggage of a full pull request that I did later.

    https://github.com/SyamSundarKirubakaran/android-jetpack/pull/20

    This pull request went through a short video and some research that amounted to very little, so I did what I could here. This section of the documents covers sharing. Sharing is a very small topic in the grand scheme of Android, but it is used very often. The documents I made were an introduction to the space, along with showing the new way to do things. Previously you had to create an action handler to handle every share, but Google created a new API to handle it. Because it is still relatively new, there isn't much information about it (though with so little information it was at least easier to decide what to cover). Had I known the research would be this scarce, and given the time, I would have asked the project head to help me with researching to flesh out this document more.

    by Steven Le at December 10, 2018 09:21 PM


    Thanh Nguyen

    OSD600 Release 0.4 Part 2

    In this blog post, I’ll talk about the second External Pull Request I’ve done for OSD600 Release 0.4.

    For this PR, I decided to work on issues that have to do with ESLint, particularly setting it up. So ESLint is a pluggable linting utility for Javascript and JSX. More information and documentation can be read on eslint.org.

    The repo that I was working on is the Kentico Developer Community Site. The issue they had is that their project code is full of comments that disable ESLint checks, and they would like someone to create ESLint configuration files for two of the project's directories.

    After I skimmed through the Configuring ESLint user guide provided, as well as some similar issues, I was able to successfully create a PR.

    There are many ways to create an ESLint configuration file:

    • JavaScript – use .eslintrc.js and export an object containing your configuration.
    • YAML – use .eslintrc.yaml or .eslintrc.yml to define the configuration structure.
    • JSON – use .eslintrc.json to define the configuration structure. ESLint’s JSON files also allow JavaScript-style comments.
    • Deprecated – use .eslintrc, which can be either JSON or YAML.
    • package.json – create an eslintConfig property in your package.json file and define your configuration there.

    Taken from eslint user guide.

    They also have an order of precedence if placed in the same directory; the priority is:

    1. .eslintrc.js
    2. .eslintrc.yaml
    3. .eslintrc.yml
    4. .eslintrc.json
    5. .eslintrc
    6. package.json

    Then I had to look up how to disable rules for a group of files in configuration files instead of using in-file comments.

    {
      "rules": {...},
      "overrides": [
        {
          "files": ["*-test.js","*.spec.js"],
          "rules": {
            "no-unused-expressions": "off"
          }
        }
      ]
    }

    Taken from the user-guide.

    For my purpose, I changed the no-undef rule to “off”. I also had to figure out how to select all the .js files in the directories. Then I came across this:

    **/*.js

    This glob pattern matches all JavaScript files in and below the directory that contains the ESLint configuration file.
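
    Putting those two pieces together, the override I added looked roughly like this (a simplified sketch of the idea, not the exact Kentico configuration):

    {
      "rules": {...},
      "overrides": [
        {
          "files": ["**/*.js"],
          "rules": {
            "no-undef": "off"
          }
        }
      ]
    }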

    The last step was just to remove all the unnecessary eslint-disable comments from every file.

    To make this easy, I just used Visual Studio Code's search and replace feature.

    by Thanh Nguyen at December 10, 2018 09:09 PM


    Daniel Bogomazov

    Final PRs for DPS909





    Focus

    As I stated in my previous blog entry, there was a feature I decided to work on for the Focus browser on iOS that I personally wanted to see implemented. I ended up finishing the fix and hope to see it included in the release version soon. The fix had to do with how the application was setting the user agent. To fix the issue, I added a function to the user agent class that allows it to persist its desktop user agent state.

    Brave

    I decided to work on another issue for the Brave browser on iOS. This issue had to do with the favourites view showing a lower resolution icon for some sites. After some digging around, I noticed that the code they were using passed in the favicon manually and then upscaled it to fit the size of the favourites icon view. Since a lot of the codebase is based on the Firefox application, I decided to take a look at how Firefox implements this functionality. I found that instead of passing in the favicon manually, Firefox uses a function to get it from the website. I then changed the code to work more like Firefox's and ended up correcting the error.

    by Daniel Bogomazov (noreply@blogger.com) at December 10, 2018 08:37 PM


    Thanh Nguyen

    OSD600 Release 0.4 Part 1

    This is the first blog post for the final release of OSD600. I’ll talk about what I did for the third and final internal Pull Request for the GitHub Dashboard Project.

    As mentioned in the third blog post for Release 0.3, this fourth Pull Request is responsible for creating mock-up pages to house the Dashboard components and make the page responsive.

    So one of my classmates had created a homepage with a login button using the React framework and its default start-up page. My job was just to improve upon that, to make the site more suitable for our needs and requirements.

    To make the site more responsive across devices, I have made use of Bootstrap framework, adding these lines of code to the main index.html page:

    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">

    after the meta tags in <head></head>.

    I also included the jQuery script (https://code.jquery.com/jquery-3.2.1.min.js) and the Bootstrap JavaScript file, which Bootstrap needs for its interactive components.

    As for the mock-up pages, I leveraged what I learned from the WEB422 course at Seneca and implemented a routing feature for our dashboard, so that the user can navigate to different components displayed on different routes, accessed via a navigation bar on the home page.
    Screenshot (59)
    The routes are grouped into a drop down list:
    Screenshot (60)
    When a user clicks on different items in the list, they are redirected to different component views. I only implemented the routing feature, so I didn't write any logic for the components; instead, I just display what each component is supposed to do:
    Screenshot (61)
    Here is how the component drop down list is implemented, as an unordered list that has links to different components :
    Screenshot (63).png
    Here is how the routes are set up in  App.js:
    Screenshot (64)
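
    Since the screenshots don't come through here, the route setup in App.js followed the usual React Router pattern; a rough sketch (the component names and paths are placeholders, not the actual dashboard components):

    import React from 'react';
    import { BrowserRouter, Route, Switch } from 'react-router-dom';
    import Home from './components/Home';     // placeholder components, for
    import Repos from './components/Repos';   // illustration only
    import Issues from './components/Issues';

    function App() {
      return (
        <BrowserRouter>
          {/* the navigation bar's drop-down links point at these paths */}
          <Switch>
            <Route exact path="/" component={Home} />
            <Route path="/repos" component={Repos} />
            <Route path="/issues" component={Issues} />
          </Switch>
        </BrowserRouter>
      );
    }

    export default App;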

    And that’s it for this blog post! Thanks for reading!

    by Thanh Nguyen at December 10, 2018 08:17 PM


    Allan Zou

    Reflection on Open Source Course

    Throughout the OSD600 course, I learned a lot about open source development. What I would count as the most important are learning how to use Git and GitHub. I also learned about how the open source community works, such as the difference between contributors and maintainers. I also learned about automatic code formatting for GitHub, licences, and continuous integration, though I never really used those in class.

    Open source worked mostly as I would expect – some people volunteer their time to work on it for no money, and everyone benefits from it. The part that makes it work is that for every 10000 people who just use open source projects without giving anything in return, there’s 1 person who does end up contributing to it. That means if there’s something popular with 10 million people who use an open source project, there will be about 1000 people working on it. I’m still unsure how people and companies are making money from open source, especially large companies such as Microsoft and Google. After all, they would make more money if they required people to pay them for the software.

    Towards the end of the course, I felt like I hit a dead end with regards to finding open source projects to work on, since I was just looking for things that I could patch up easily, rather than issues that could count as “bigger” for the bigger pull request requirement. Unlike other courses, I felt like this one had a steep learning curve which I didn’t fully meet at the end. Although I’ve kept up during the beginning and middle of the course when we were learning Git and Github and having the 0.1 and 0.2 releases, I definitely fell behind near the end when the requirement for bigger pull requests was implemented. As I’ve said in previous blogs, getting into most open source projects is a difficult and time consuming task, which is why I’ve stuck to projects like Codezilla and 30 Seconds of Code, since although these projects are large, each file is independent of others, and can be quickly understood just by looking at it. Meanwhile, every other open source project that I’ve thought about contributing to but decided not to was because it was too difficult to learn – even more difficult than learning a new programming language! Even when I looked at large open source projects made with Java (my favourite programming language), I felt confused and overwhelmed.

    For example, whenever I would get into a big project, there would be incomprehensible keywords in the code such as “ConfigCache” or strange functions like “ParseGPUDeterminismMode(StartUp.m_strGPUDeterminismMode)”. Those are two examples from Dolphin, a GameCube and Wii emulator which I use, but didn’t contribute to because it was too hard. Each file is connected to the project as a whole rather than being independent, and I expect working on this project would take as much time as a full time job, and take several weeks and the mentorship of someone who already works on the project just to understand the code. A similar problem occurred with other open source projects that I was thinking about contributing to, such as Open Shell (Windows taskbar extension), Visual Studio Code (IDE) , and Code::Blocks (IDE).

    I think the cause of such difficulty is that my other courses have “canned assignments” – simple, write from scratch programs that each student is required to write, and can be expected to be done in a short amount of time. Meanwhile, open source doesn’t have canned assignments, and instead uses real life open source projects. These real life open source projects are far more complex than the canned assignments given to me in other courses. Some people who work on open source do it as a full time job – or have quit their full time job so they have more time to work on open source. Although I like the idea of contributing to open source projects that I use, in the future I will probably just use open source projects while hoping someone else maintains it and I can get a free ride.

    The best thing that came from this course is actually not related to the course material. My professor’s enthusiasm for open source inspired me to open source a program I had been working on: Fimarchive Search GUI, a Java based GUI to search Fimarchive, which is a 6 GB zip file of My Little Pony fanfictions (which I read a lot of). I made it because even though the archive contained all the fanfictions and could be read offline, the website was the only user friendly way to search and filter the fanfictions. My concern was the website may go down one day due to the owners not wanting to maintain it anymore, or if it got taken down due to copyright infringement (fanfictions are in a legal gray area – it’s unclear if they are legal or not). I know that no website lives forever, but a 6 gigabyte zip file can, as long as people are still redistributing it via torrents. My program seeks to emulate the user friendly search of the website by allowing the user to filter by tags, user rating, views, date published, and word count, and order by views, word count, user rating, and date published. The GUI is quite ugly and looks 20 years old, and there is probably a lot of spaghetti code, but it works. I hope that if the website gets taken down, my program will actually become useful, and maybe someone will even contribute to it. A few people have already used it and sent me messages on Reddit, so that’s quite encouraging.

    Just for clarification, I made the program to search the archive, but someone else made the archive itself.

    Screenshot of the GUI program:

    FimarchiveSearchGUIScreenshot

    Link to program: https://github.com/itchylol742/FimarchiveSearchGUI

    by azou1 at December 10, 2018 07:10 PM


    Brett Larney

    Final Thoughts

    For my final week I made two smaller pull requests. With final exams coming up and final assignments due, I did not have a lot of time to make two larger pull requests.

    For my first one I went back to NodeChat. When you chat in NodeChat you see your avatar, and ‘Me’ underneath. I thought that having your username instead of ‘Me’ would make it easier to remember your name, and would be a nice change. Most chat apps use your username and not ‘Me’ so I figured it would be a welcome change.

    This change was fairly trivial: just using the username from this.state instead of the hardcoded “Me”.
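
    Assuming a React-style component (which the this.state reference suggests), the change would look roughly like this; the element and class name are placeholders, not the actual NodeChat markup:

    // before: the label under the avatar was hardcoded
    <span className="chat-name">Me</span>

    // after: read the username kept in component state instead
    <span className="chat-name">{this.state.username}</span>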

    You can view this PR here: https://github.com/OTRChat/NodeChat/pull/71

    My second and final PR was to Github Desktop, fixing a small typo I had noticed while I was searching through files for another PR I had done.

    You can view this PR here: https://github.com/desktop/desktop/pull/6348

    All in all I had a good experience this semester working on open source. I learned that it is actually not that hard to make some meaningful contributions to large projects that I use all the time.

    I also got more confident picking through a large codebase to find the right files and places in the files where I should make my modifications.

    I feel like I have made some decent contributions to different projects and that my contributions were appreciated by the maintainers. I look forward to making more open source contributions in the future.

    by 93brett at December 10, 2018 06:49 PM


    Yuecheng Wu

    Release 0.4 – Another External PR in the Bag

    Hi Everyone, I am back with another Blog post. For the last couple of weeks, since it’s the end of the term, I have been swamped with all those final projects and tests. That’s why this blog post came a little later than it should have. 

    For my last external pull request, I decided to continue working on issues from the Pandas repo, since I had so much fun doing it last time and I already have the environment set up. I started looking for issues that I could do in Pandas and I found issue #24058: DOC: Fix order of sections in Series docstrings. However, as I was working through the issue, Alex accidentally fixed it through his pull request ‘Fix error GL07 #24126‘ without realizing it, so my issue was closed. Just when I thought I needed to find a new external issue to work on, Alex, being the very nice person that he is, offered to help me find a new issue since he had accidentally fixed mine. This is why I love the Open Source environment: people are so friendly and helpful. 

    The issue Alex found for me was the SS03 (Summary does not end with a period) errors in the generic.py file, which show up after running the command: 

    ./scripts/validate_docstring.py --errors=SS03 --format=azure | grep 'pandas/core/generic.py'

    However, the command was giving me some errors that I had to work through. First, it was telling me that ‘grep’ is not recognized as an internal or external command; grep is a Unix tool and isn't available in the Windows command prompt, so one of the Pandas members suggested modifying my command to work on Windows like this:

    ./scripts/validate_docstring.py --errors=SS03 --format=azure --prefix=pandas.series.

    This should return a subset that I can work with. However, another issue arose with this command, a ‘UnicodeEncodeError’: 

    I tried googling this error and nothing could fix it. A lot of the answers were saying it's an issue with Windows. Just when I thought that I couldn't fix it and was about to give up, I looked at the error again and realized the issue happened in the file cp1252.py, so I decided to take a look at the file to see if I could find anything. I am so glad I did, because I noticed cp1252.py is a file that handles encoding/decoding of documents. It also has a decoding table, which does not have the character ‘\u2155’! After I added it to the table and ran the command, it was able to generate all the errors I was supposed to see (SS03 – Summary does not end with a period). Yes! Since there was no issue created in Pandas for these errors yet, I created the issue and said I would be working on it. Here is the issue: DOC: Fix docstrings error SS03 – Summary does not end with a period #24164.

    Fixing these docstring errors isn't as easy as it sounds. Although all I needed to do to fix the issue was to add a period ‘.’ at the end of the line, the problem was finding where to add it. Even though the errors generated by the command give a line number, the line number only shows the start of the function where the error occurred, and the actual definition of the function where the summary exists could be in a different file. It took me a while to find a lot of those summaries because Pandas is a big repo and has a lot of files to look through. The good thing about Visual Studio Code is that I can use the ‘search’ tool to go through all the occurrences of a particular keyword, but it still took me a while because some functions have multiple definitions in different files and I had to look through all of them to find the missing period. Nevertheless, I was able to find and fix all the missing periods. Also, because of this issue, I was able to look at a lot of Pandas' code and algorithms, which I believe will certainly help me in the future. Here is the pull request: DOC: Fix summaries not ending with a period #24190.

    If there is one thing I can take away from this pull request, it is to never give up. I faced so many issues just getting the command to work, not to mention looking through all the Pandas files just to find a missing period. However, I didn't give up; I kept pushing and grinding, because I know I will face similar situations in the future where nothing seems to work the way I want, but if I keep working on it and don't give up, I can eventually conquer anything that comes my way.

    by ywu194 at December 10, 2018 06:26 PM


    Hojung An

    Project Stage 3 - Optimization - Re-Visited

    So, while testing different ways to optimize the program using the compiler options I realized that there's some flaw in the methods I used to benchmark and compare the performance with different build versions.

    The initial time used to benchmark had perf overhead, which most likely affected the actual execution time of the program.

    Also, on top of the compiler options I came across (-fmerge-all-constants, -floop-parallelize-all, -ftree-loop-distribution), I found a few more options:
    -Ofast
    **Ofast enables all -O3 options and more by disregarding strict standards compliance. There is some 'risk' with using this option, as some of the enabled options may not be safe or compatible with the program's code.

    -mtune=cortex-a57(aarchie)/intel(xerxes)
    **mtune tunes all applicable generated code for the specified cpu-type.

    -march=core2(xerxes Intel Core 2)/native(aarchie)
    **march generates instructions specific to the machine with the specified cpu-type.

    Based on these, the following are the build options that I'll be testing:

    1) Out of the box (-O2)
    2) -O3
    3) -O3 -fmerge-all-constants -floop-parallelize-all -ftree-loop-distribution
    4) -Ofast
    5) -Ofast -march=core2/native -mtune=intel/cortex-a57

    Command: multitime -n 50 ./bzip2 -c file > /dev/null

    There are changes to the test as well. The number of test iterations stays the same at 5 (discarding the first 2 for cache warm-up).
    I created a new random text file that is 300MB in size.
    Also, I used a document created in my PRJ566 class as a real document to compare with the random text file. This file has text and images and is 138 pages long, just like the files that would be used in real life.


    !!!Note!!!
    On xerxes, multitime was not available, so I downloaded the source, created a directory, and built it inside that directory. The multitime I used on xerxes did not run from /usr/bin but from ~/multitime/multitime-1.4/


    Results

    The results are quite interesting.
    On x86_64 the improvement is consistent at 2%, while on aarch64 the improvement varies per file but also averages out to about 2%.
    So it looks like the -Ofast -march -mtune build gives an average of 2% improvement.

    Let's see if I can optimize the code to get a better improvement.

    by Hojung An (noreply@blogger.com) at December 10, 2018 06:23 PM


    Julia McGeoghan

    Working with React and Typescript to create a Newsletter Subscription component

    Common Voice

    I recently made a contribution to a Mozilla application called voice-web. It's an interface designed to collect speech donations for the Common Voice project, an initiative to make voice recognition technology non-proprietary and open source.

    An image from the voice-web english homepage.

    Motivation for Contributing

    I decided to contribute to this site's front end because I've been meaning to get more familiar with both React and TypeScript. I also noticed that the site design was overseen by a professional designer, and I was excited to get more experience working from design comps vs. something like Bootstrap.

    What I made

    By following the specs and requirements shown in the initial issue, I eventually ended up with the following:

    https://medium.com/media/fed0d38ebef884b7c6c38ac8b36d6b5d/href

    This component was fairly simple, but to implement it I needed to adopt the methods used by the project as a whole. Because of this, there were quite a few learning curves for me to get over. The purpose of this post is to go over some of what I learned in overcoming them, linking to resources that readers might find useful along the way. If you plan on working with TypeScript + React, some of what's covered here might help you solve certain issues in the future.

    Function vs. Class Components

    I never worked with Typescript + React before this project, so when I saw the following I got confused and encountered my first hurdle:

    Example of a functional component in the project

    Combining React with TypeScript means that fields become statically typed. So when you're passing state or properties into a functional component, the data types of the parameters are stated before you reach the function implementation. A functional component without TypeScript might be easier to implement at first, but without type checking you can increase the number of bugs in your application as a whole. Type checking also has the advantage of making the code more readable, since you immediately understand the types of a function's parameters and return value; it removes some guesswork.

    Example of a simple function component without any Typescript.
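
    Side by side, the difference looks roughly like this (a simplified sketch, not the actual voice-web component):

    import React from 'react';

    // Without TypeScript, nothing tells you what `props` should contain:
    //   const Greeting = props => <h1>Hello, {props.name}</h1>;

    // With TypeScript, the shape of the props is declared before the implementation:
    interface GreetingProps {
      name: string;
    }

    const Greeting = ({ name }: GreetingProps) => <h1>Hello, {name}</h1>;

    export default Greeting;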

    Class components are a bit more complicated. With TypeScript, your React component's state and property fields will be defined in interfaces, which can in turn extend other interfaces. Ogundipe Samuel from LogRocket explained them well in this post, stating:

    One of TypeScript’s core principles is that type-checking focuses on the shape that values have. This is sometimes called “duck typing” or “structural subtyping”.
    In TypeScript, interfaces fill the role of naming these types and are a powerful way of defining contracts within your code and contracts with code outside of your project.

    It wasn’t until I read this that I began to make sense of the function and class components above. For another more in-depth understanding you should check out this post as well.

    In React it’s best to keep as many components as possible stateless. However I knew that for my Email Subscription component I’d want to keep track of its state; at a minimum I’d need to keep track of whether the component’s email input was valid or not so I could display certain styles or errors. So, needless to say I ended up creating a class component.

    But a lot more needed to be added at this point. Since my component was essentially a form it needed to become a controlled component.

    Controlled Components

    In React, controlled components are recommended for implementing forms. Controlled components handle form data through the component’s state, while uncontrolled components handle form data in the DOM itself.

    For example, for my HTML email input, I had a property called onChange that would trigger a handleChange() function. This function would then set the component's state using this.setState().

    Input with an onChange function call to handleChange()
    handleChange function that assigns the inputted value to the state directly
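
    A minimal sketch of that pattern, with placeholder names rather than the real voice-web code:

    import React from 'react';

    class EmailInput extends React.Component<{}, { email: string }> {
      state = { email: '' };

      // every keystroke goes through setState, so the component state stays
      // the single source of truth for the input's value
      handleChange = (event: React.ChangeEvent<HTMLInputElement>) => {
        this.setState({ email: event.target.value });
      };

      render() {
        return (
          <input type="email" value={this.state.email} onChange={this.handleChange} />
        );
      }
    }

    export default EmailInput;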

    Adhering to this practice ensures that the component's state is the ‘single source of truth’. Also, with an uncontrolled component you may run into an issue with refs. As Manjunath states in this article:

    React recommends using controlled components over refs to implement forms. Refs offer a backdoor to the DOM, which might tempt you to use it to do things the jQuery way. Controlled components, on the other hand, are more straightforward — the form data is handled by a React component. However, if you want to integrate React with a non-React project, or create a quick and easy form for some reason, you can use a ref.

    Error Assigning to State

    When I first built my component and was attempting to assign to state, I would get an annoying error that didn’t make any real sense to me at the time:

    Cannot assign to ‘state’ because it is a constant or a read-only property
    A good example showing the cause of the error.

    Looking at my code, nothing about it screamed ‘read-only’ to me, so I ended up relying on Google to help me solve this one. Initially my search didn't help much, and at first it led me to believe that I had run into a possible regression.

    However, as it turns out, the change was intentional. DefinitelyTyped, the repository of TypeScript type definitions that provides React's typings, landed a patch that changed the code's behavior.

    In an effort to improve type safety and prevent error-prone code that accesses undefined state, DefinitelyTyped made it so that a compile-time error is produced if a state variable is accessed but never initialized.

    This code demonstrates the problem. Notice how this.state.who is never initialized but still accessed; before the patch this would create a runtime error.

    In addition to that assigning a value to this.state would throw an error if done in a constructor. This is because it’s bad practice to access the state in a constructor, since it could cause potential problems with mutability. So now whenever the state is accessed in such a way, the error:

    Cannot assign to ‘state’ because it is a constant or a read-only property

    Is shown during compile time.

    There are different approaches for fixing this. For me, I was able to solve it by creating an interface for my state…

    Using that as my state type in the component, then defining state fields in a single object set on state, within the constructor.
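
    In code, the fix was along these lines (a sketch with placeholder field names, not the actual component):

    import React from 'react';

    interface EmailSubscriptionState {
      email: string;
      isValid: boolean;
    }

    class EmailSubscription extends React.Component<{}, EmailSubscriptionState> {
      constructor(props: {}) {
        super(props);
        // the whole state object is assigned once here; individual fields are
        // never written to this.state directly, which is what the compiler rejects
        this.state = { email: '', isValid: true };
      }

      render() {
        return null; // the real component renders the sign-up form
      }
    }

    export default EmailSubscription;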

    Final Review

    I got a fairly thorough review from one of the main contributors to voice-web after I made a PR for my changes. I decided to add a section here based on a couple of things that were mentioned.

    This touches on an issue in another project I was working on, Github-Dashboard. Developers were having CSS conflicts between pages because every CSS file imported into the project was having its selectors applied globally. There are many solutions to this problem, as outlined here. However, here it was recommended that I namespace the selectors in sign-up.css.

    Notice what was mentioned about constructors at the end, and how he mentions that they could be avoided. There’s a chance this is being recommended because the constructor adds redundancy by setting props, and can lead to mutations in state if not handled properly.

    If you see on line 35 here I set a component in my email subscription component’s state. But why is this a bad thing to do?

    As mentioned in the React docs:

    State should contain data that a component’s event handlers may change to trigger a UI update…adding redundant or computed values to state means that you need to explicitly keep them in sync rather than rely on React computing them for you.

    State should have the simplest possible representation, meaning that boolean flags should be used whenever possible instead of more complex values. Render() should compute data based off the state.

    Conclusion

    I think this will be the approach I take from now on when I try to learn new technologies, for the most part. Before, my standard process was to create small, personal, standalone projects to practice what I'd read in the documentation or some guide. But by contributing a feature to something already established by seasoned developers, I have been able to adopt their good habits fairly early in my own development. It was a great experience and I learned a lot from this.

    by Julia McGeoghan at December 10, 2018 07:10 AM


    Volodymyr Klymenko

    Code refactoring is an unavoidable step in Software Development

    Photo by Mahesh Ranaweera on Unsplash

    I am continuing to work on the Creative-Collab app. You can find an introduction to this project in this post:

    Creative-Collab: Open Source web app for creative collaborative writing

    I was writing some unit tests, and I found a gross error in the JSX code of the React app. One of the components had been using the class attribute on its HTML elements instead of className inside the render() method of the React.Component. Once I saw it, I understood that there was a need for code refactoring.

    I filed an issue and assigned it to myself. Not only did I want to fix that error, but also I was going to go through the code and look for other opportunities to improve it.

    It’s been a while since my last contribution to the project, so I had to familiarize myself with the new code. To be honest, I felt like the project’s codebase is totally new to me even though I was doing some code reviews for pull requests.

    I replaced all the class attributes with className. Next, I started going through the other files, and I didn't find any gross errors there. The only thing that confused me was the naming of a CSS file, so I renamed it. Then I committed my changes and opened a pull request.
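
    The fix itself is a one-word change per element, roughly like this (a generic example, not the actual Creative-Collab markup):

    // before: React warns about `class` in JSX, since `class` is a reserved word in JavaScript
    <div class="story-board">story text</div>

    // after: React expects the className attribute instead
    <div className="story-board">story text</div>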

    I believe that code refactoring should take place after completing a milestone in software development because it helps to improve the code quality and maintainability of the code. From my point of view, it’s vital not only in small closed source projects but also in open source projects that are growing.

    In addition to that, I think that the presence of unit tests could help to catch the class/className problem in this project. Anyway, it was a good lesson.

    by Volodymyr Klymenko at December 10, 2018 06:22 AM


    Victor Kubrak

    Final Post

    Open Source was my first professional option course. I took it because

    • I heard from a person working in the industry that open source knowledge and skills are important and that I must take this course
    • I heard from other students that this course is taught by a good professor
    I am satisfied with this course. I learnt GitHub, gained some experience in open source, improved my resume, and so on. Now I am going to try other courses. In my last year at Seneca I will definitely take the second open source course.

    by Victor Kubrak (noreply@blogger.com) at December 10, 2018 06:08 AM


    Ryan Vu

    DPS909 — rc 0.4 — Week 1

    DPS909 — rc 0.4 — Week 1

    For the first week of RC 0.4, I was about to keep working on the internal project from RC 0.3, but unfortunately the project seemed to be inactive. As I mentioned in the previous blog, this PR was kinda important, however it still has not been merged or even fully reviewed. So I had no choice but to find another project to work on.

    GitHub-Dashboard

    The new project I switched to is a web application which summarizes a GitHub user's information and activities in a pretty layout. My first impression of the project was that it was well organized and well oriented. The structure was already formed, and there were prototypes and clear goals to work on.

    Example of a mockup

    The core concept

    Basically, the app gets information from the GitHub API, then parses it and puts it into a nice format. However, I learned that the project already used a 3rd party library called octokit for that, which was really convenient.

    The first issue on this project I was going to work on required creating a component. Taking a look at the code, I saw that Bootstrap CSS was included. And since the project was using React as its framework, why not use a component library which combines both Bootstrap and React? react-bootstrap was the obvious option.

    That was all the preparation for the following week.

    Link


    DPS909 — rc 0.4 — Week 1 was originally published in pynnl on Medium, where people are continuing the conversation by highlighting and responding to this story.

    by pynnl at December 10, 2018 04:33 AM


    Victor Kubrak

    Release 0.4 Week 3

    This week I finished all PRs that were assigned to me last week:
    Descriptions of these functions can be found here.
    The third PR does not pass the checks because it uses a function written in one of the previous PRs, which has not been merged yet.

    by Victor Kubrak (noreply@blogger.com) at December 10, 2018 03:06 AM


    Adam Pucciano

    TraceLabs – CTF

    This month I officially signed on to volunteer my time to TraceLabs, helping to develop and run their Capture the Flag scenarios. However, the CTFs TraceLabs operates are not your usual hacker-con type events. Instead, teams use their network knowledge and reconnaissance techniques to help uncover the whereabouts of missing persons. This information is then bundled and handed back to law enforcement agents.

    The platform that TraceLabs uses to do this is part of an open project: CTFd.

    This is a standard, customizable CTF platform written in Python that runs both a database and a web interface to keep track of event progress, team scores, and posted challenges. It also has plugins for SQLAlchemy, Docker, and a few other technologies. I highly suggest looking at their repo's wiki. 

    During events, TraceLabs admins want people to be able to submit data freely, to be scored at a later time as the event goes on. This also gives admins the ability to adjust the flag score based on the information provided. Since we are dealing with real missing persons cases, the admins scrutinize each submission very heavily. In most cases this process takes time, so rather than having a fixed FLAG to achieve, submissions are saved, which gives teams the opportunity to continue the search with their newfound results.

    Again, if you are familiar with any of my last blogs – I do not take well to Python. (To me, it just feels like everything comes from a magic black box). Regardless, I signed up to help, so this was a great way to push me forward.

    My first step was to see what exactly was going wrong with the current repo.

    It turns out the guy who had been ‘maintaining’ this code has somehow vanished. Perhaps we should put him on the list for the next conference. I guess that is what you get with volunteer open source projects. People can come and go as they please, and you don’t have much control over it. I do sincerely hope he is okay though.

    Next I investigated the newly minted 2.0 version of CTFd (main repository). Although it still did not feature the changes we wanted, after checking it out, it looked to be a nice upgrade with a lot more ‘nice to haves’ and great visual perks. It’s also really nice to stay close to the most current code base so I decided to try and recreate the intended changes on the new version.

    At first this was really intimidating: there were a lot of files that were named the same, but in different directories. I thought to myself: 'somehow all of this makes a web page, with the look of those MVC Bootstrap sites no less'. I tried to Ctrl-F familiar text I had come across while testing the application. After spending some time playing around with the files I managed to see where challenges were checked, and set that logic to always be true, thus accepting the submission but setting its status to 'incorrect'. This allowed the application to take in submissions without authenticating them against any flag the challenge had set. I further studied the HTML-to-JS-to-Python workflow from earlier, and managed to get a working 'patch' service that, on an admin view page, would update the submission to 'correct' with a simple click. By taking the delete function as a template I managed to figure out how to do this. It was a markup button that, with the help of an assigned class, was picked up by a JavaScript click handler. The JavaScript would then make a route call to a name defined in somemodel.py, with that route and its parameters, in this case the submission id. Perfect. I was starting to feel more in my element again! I also did what every good programmer does from then on: copy paste some more. :).

    Functions look awfully similar…

    After patching together a version that just changed the submission type, I wanted to dig deeper to see if I could actually mark submissions as solved. Again I went searching and borrowed some of the previous code that I had changed in order to submit fails. It looked like the application back end separated submissions further, into both a Solves and a Fails table. Thus, I decided to modify my patch to delete the submission and use that data to create a new solve linked to that team.

    It looked like dynamic challenges allowed for more than one solve, so that saved me from having to put in extra work here. Otherwise, I was considering making some changes to the models and database schema. Standard challenges only allow one submission, so I will have to talk with the team and make sure this can work as-is for our events.

    The last issue was to allow admins to allocate awards on the fly. After all my previous experience, this part felt pretty easy. I planned to create an identifiable value in a number input field, pass that along when clicking ‘Mark As Answer’ on the row, and then commit an Awards object to the database with that particular value.

    This took a lot more work to figure out than I had planned. Other object keywords were coming into the picture, and I was trying to deduce how they did so. I was immediately drawn to the header of the file, where many imports were declared. This allowed me to use other object types to handle a lot of the heavy lifting, creating and committing changes to the database.

    Where the magic happens

     

    The major update so far to this platform was the ‘patch’ action to submissions. I hope to refine it further once my understanding of these imported objects grows a bit more with this project.

    A look at the ‘patch’ command; let's see where I got all these snippets of code from:

    @admins_only
    def patch(self, submission_id):
        # Some database transaction syntax that I got used to seeing. Worked after finding
        # the proper import file
        submission = Submissions.query.filter_by(id=submission_id).first_or_404()
        challenges = Challenges.query.filter_by(id=submission.challenge_id).first_or_404()
        challenges.value = challenges.value - 1

        # added after I got solves working; I started getting confident looking at the models
        # and filling in data like the other objects had been
        awards = Awards(
            user_id=submission.user_id,
            team_id=submission.team_id,
            description=submission.provided,
            value=1,
            category=submission.challenge_id,
        )

        # this was the method before adding solves, where I just changed the type
        submission.type = 'correct'

        # the log import I copied from challenges.py after seeing it come up a lot when I
        # first played with the submit
        log('submission', "[{date}] {name} submitted {submission} with TYPE {kpm}, Challenge ID {tpm}",
            submission=submission.id,
            kpm=submission.type, tpm=submission.challenge_id
        )

        # Solves object made as I've seen it before; I modified the data to take in the current
        # submission data and save this as a solve to the database
        solve = Solves(
            user_id=submission.user_id,
            team_id=submission.team_id,
            challenge_id=submission.challenge_id,
            ip=submission.ip,
            provided=submission.provided
        )

        # adopted after seeing the delete function below this one
        db.session.add(awards)
        db.session.add(solve)
        db.session.delete(submission)
        db.session.commit()
        db.session.close()

        # some return, just left it like this 😛
        return {
            'success': True,
        }

    I really feel like I learned a lot by tackling this project over the weekend. I definitely feel that Python can be less scary, and I have finally unveiled the magic syntax behind all of its import capabilities. I also have a lot of newfound interest in making this project better than it was. I want to add so many features now that I am getting the hang of how everything is put together, from UI to DB.

    The next steps in development are to set up a shared GitHub repository, so that no one person is in charge of maintaining the repo. In addition to this, it would be useful to see which plugins might be helpful for this application. Lastly, over the holidays we will be doing lots of testing of each of the functions, and perhaps writing new tests to ensure nothing is broken in the new custom submission workflow.

    Please check out TraceLabs.org for a great upcoming local event on January 26th, as they will be holding a Missing Persons CTF at York University, Toronto. York University and Seneca College students are welcome to come and join in on the hunt!

    If you’d like to see the project’s progress, you can check out my github!

    Thanks for reading,

    A

    by pooch11 at December 10, 2018 02:15 AM

    December 09, 2018


    Yeonwoo Park

    Portfolio Generator – Add Continuous Integration tool

    In the OSD600 course, we learned about Continuous Integration. Continuous Integration (CI) is the process of building and testing the code whenever it is changed. The big advantage is that we can choose the operating system and the version of the compiler or framework used for testing, so developers can make sure that the latest code works properly. It also makes issues easier to find and fix, since CI shows where the tests break. There are a lot of continuous integration tools, such as Travis CI, Azure Pipelines, and Jenkins. I decided to add Travis CI to the project Portfolio Generator.

    First, I looked at the Travis CI tutorial to get started (see the link). After authorizing Travis CI, I enabled it for this project's repository. After that, I added a .travis.yml file that tells Travis CI how to run the build on its virtual machines.

    language: csharp
    mono: none
    dotnet: 2.1
    script:
      - dotnet restore
      - dotnet build ./Portfolio-generator-console/
    

    There are a few configurations I had to specify for this project. First, the language is set to C# (csharp). Travis CI does not build .NET Core applications by default, so I had to specify it by turning off the mono setting and adding the dotnet version. I also added commands for the .NET Core project, such as ‘dotnet restore’ for restoring all dependencies in the project and ‘dotnet build’ for building the project with its dependencies. Here is my PR for adding the Travis CI configuration. After adding .travis.yml to the project repository, I can see the build status on the Travis CI website. Currently, there are no errors when Travis CI runs; that is because there are no other configurations or tests in the project yet. We will add unit tests to the project and run them in Travis CI.

    I was surprised that the basic configuration of Travis CI was easier than I thought. Once I am more familiar with Continuous Integration tools, I will add more configurations into this project, as well as other projects I am working on.

    by ywpark1 at December 09, 2018 07:27 PM


    Huda Al Dallal

    My overall experience – open source

    The past few months have been great, scary, great, struggles, great, successful!

    I can honestly feel that I have learned A LOT from this amazing open source class that I took at Seneca, OSD600, with one of the best and most supportive professors, David Humphrey. I would never have expected to work on these amazing projects, test out start-up projects, and work with amazing developers and other students.

    The biggest thing I have learned, other than the value of Open Source, is GitHub. I have learned a lot about using it, and I am far more familiar with it than I was before taking this class. I loved the opportunity I had to participate in Hacktoberfest, and I will definitely participate again next year, even though I will not be taking this class anymore.

    I'm also happy that we got the opportunity to start new projects and work on them along with other students in the classes. It honestly gave me pride and a sense of success knowing that we started something, and that it will be useful for others.

    There isn't much I would change about this amazing class! I think many students will agree with me that this is like a mini internship: you get real hands-on experience with the help of the professor.

    I have always liked the idea of Open Source, and I like it even more after experiencing it for a few months. I just love the idea of ‘free for people’. There has been so much good done with Open Source, and I will for sure continue with a bunch of these amazing projects I have worked on, and I will keep blogging even though the open source class is ending in a few days…

     

    by Huda A at December 09, 2018 09:28 AM