Planet CDOT

January 17, 2017


Len Isac

Build process for cflow & cadvisor Linux open source projects

I chose the cflow and cadvisor Linux open source projects for documenting the package build & installation process, and I have listed some of the things I encountered along the way.  Both package installations were done on a Fedora 25 virtual machine.

cflow

A GNU open source project, licensed under the GPL, that charts control flow within C source code.

Build & install steps:

  1. Downloaded cflow-1.4.tar.bz2 from http://directory.fsf.org/wiki/Cflow ‘Download’ link
  2. Unpack tar file: tar xvf cflow-1.4.tar.bz2
  3. Change to install directory: cd cflow-1.4
  4. Create make files: ./configure
  5. Compile package: make
  6. Switch user to root: su root
  7. Install programs, data files, and documentation: make install (recommended to run with root privileges)
  8. Verify that installation completed correctly: make installcheck

All 21 tests were successful.

Testing newly installed cflow software:
  1. Using the whoami.c sample from the cflow manual: https://www.gnu.org/software/cflow/manual/cflow.html#Quick-Start (a sketch of the file is included below)
  2. Once the C file is created, run: cflow whoami.c
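
For reference, here is a sketch of whoami.c along the lines of the manual's example (reconstructed to match the call graph shown in the output below, not copied verbatim from the manual):

#include <pwd.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

/* Prints the current user's name, looking it up by effective UID first
   and falling back to the USER environment variable. */
int who_am_i (void)
{
    struct passwd *pw;
    char *user = NULL;

    pw = getpwuid (geteuid ());
    if (pw)
        user = pw->pw_name;
    else if ((user = getenv ("USER")) == NULL)
    {
        fprintf (stderr, "I don't know!\n");
        return 1;
    }
    printf ("%s\n", user);
    return 0;
}

int main (int argc, char **argv)
{
    if (argc > 1)
    {
        fprintf (stderr, "usage: %s\n", argv[0]);
        return 1;
    }
    return who_am_i ();
}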

Output:

main() :
    fprintf()
    who_am_i() :
        getpwuid()
        geteuid()
        getenv()
        fprintf()
        printf()

cflow package installation was successful.  No extra dependencies were required during the installation process.  I tried this software with C++ code and it also works very well.


cadvisor

This project is licensed under the Apache License Version 2.0 and provides resource usage and performance characteristics for running containers.  It also has native support for Docker, an open source project for containerization.  Here is a previous blog of mine on Docker.

Github open source code link: https://github.com/google/cadvisor

Required dependencies:

Go language – an open-source programming language initially developed by Google.  Installation instructions for Linux can be found here: http://ask.xmodulo.com/install-go-language-linux.html (instructions for both Ubuntu and Fedora are included).  The total download size is approximately 49 MB.

Once Go is installed (cAdvisor requires Go 1.6+ to build), I followed the build & testing instructions here: https://github.com/google/cadvisor/blob/master/docs/development/build.md

At this time, I have installed go version 1.7.4 linux/amd64.

Issues:

After running ‘make build’ from the $GOPATH/src/github.com/google/cadvisor path, the cadvisor build fails:

/usr/bin/ld: cannot find -lpthread

/usr/bin/ld: cannot find -ldl

/usr/bin/ld: cannot find -lc

So I tried running ‘make test’ to run only the unit tests – all existing test files passed OK.

I’ve tested gcc with another C program using the -lpthread argument and it links fine.  I’m looking further into why these three libraries are not being found when building cadvisor – I will update once it has been resolved.
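
For context, a minimal pthread link test along these lines (a generic sketch, not the exact program from that test) builds and links cleanly with gcc pthread_test.c -lpthread:

#include <pthread.h>
#include <stdio.h>

/* Minimal link test for -lpthread: one worker thread that prints and exits. */
static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long) arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, worker, (void *) 1L) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}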


by Len Isac at January 17, 2017 03:02 AM

January 16, 2017


Matt Welke

Continuing Work on Displaying the Logs

I don’t have anything too exciting to report right now. I haven’t finished the front-end code for the new error log display yet. I found it a bit difficult to jump in and know what was going on with the VAT code we have now, since it was my teammate who converted it to a different front-end JS framework while I did other things earlier. So first I need to get familiar with the Redux paradigm before I can confidently contribute to that code. I am slowly getting comfortable with the way Redux and other Flux implementations work, so I believe I should have that work done soon.


by Matt at January 16, 2017 09:24 PM


Igor Naperkovskiy

React JavaScript library

I decided to do my post on React. It is a JavaScript library that helps developers build responsive user interfaces. It was created and is maintained by Facebook, but it is open to the open source community, which helps develop more robust libraries that can then be used to build UIs. It is almost 4 years old, having been released in March 2013. It is currently in use by many big companies like Netflix and Airbnb. The reason it caught my attention is that I believe JavaScript is the future of web development, and any website that wants to be dynamic will have to use JavaScript. I have always wanted to learn React, and I believe contributing to its open source community will help me get a deeper understanding of this technology.


by naperkovskiy at January 16, 2017 02:33 AM


Andrey Bykin

SPO600 Software Install Comparison

There was not much difference between installing software under a GNU license and my pick under the MIT license. Both provided a well-made makefile, though when installing httptunnel I was provided with a configure script, which helped create the makefile for the software.  For the other license I chose a piece of software named streama, which is under the MIT license. The process for both was pretty much standard: go get the packages and run the install steps that come with them. If you want to learn more about the two projects, you can get a better look at them here:

streama : https://github.com/dularion/streama

httptunnel : https://www.gnu.org/software/httptunnel/


by Andrey Bykin at January 16, 2017 01:22 AM

January 15, 2017


Margaryta Chepiga

Lab 01 – OptiKey Open Source Project

OptiKey is an assistive on-screen keyboard which runs on Windows. It is designed to be used with a low-cost eye-tracking device to bring keyboard control, mouse control and speech to people with motor and speech limitations, such as people living with Amyotrophic Lateral Sclerosis (ALS) / Motor Neuron Disease (MND).(OptiKey)

According to the OptiKey developers, the project was created in order to challenge other similar products on the market, with the difference being that OptiKey is fully open source and free.

OptiKey is around 10 years old as a project. Here is the link to their wiki page, which has user guides and other useful information about the project. The project is written in C# and designed to run on Windows 8/8.1/10. Here are some ideas for features and improvements that, according to the author, could be done:

  • Add support for more languages (dictionaries, as well as localizing OptiKey)
  • Add next word suggestions
  • Support for accessible buttons, sip/puff tubes, brain potential detection, and lots of other human interfaces

by mchepigaosd600 at January 15, 2017 11:42 PM


Eugueni Antsyferov

OSD600 Lab 1

The project is called Simbody.
Simbody is about studying articulated mechanisms through modeling motion in generalized/internal coordinates in O(n) time.
It is about 12 years old, having started on July 24, 2005.
https://simtk.org/projects/simbody is the main website for Simbody and has a lot of information about the project.
Simbody is written in C++. There are 90 issues and 33 people have contributed to the code. Professors and students of biomechanics are creating and altering animations of human, animal, mechanical and robot structures in the project.


by Genya Blog at January 15, 2017 02:04 AM

January 14, 2017


John James

SPO600 Lab 1

I’ve looked into a few open source projects and how they review someone’s submission of new code to be added to the project. The two projects I looked into are OpenFarm and TEAMMATES; both projects have entirely different goals and objectives. One is a web platform acting as a Wikipedia for growing crops, aimed at gardeners and farmers, while the other is a peer evaluation tool.

Open Farm follows the rule of “Better done, than perfect!”, saying they don’t expect the best code out there, but as long as it fulfills its purpose it will be added.  Another thing about this project is that they have their own code of conduct: https://openfarm.cc/pages/code_of_conduct?locale=en

Now with the other project, TEAMMATES, its community seems to prioritize helping students get into the open source world: “One of the main objectives of TEAMMATES is to help students get experience in a OSS production environment.” That is a very useful thing, for me at least. They provide an orientation task list to help people start becoming contributors:

  • Know the project: review the code and try to understand what everything is doing.
  • Set up locally: set up the application on your own computer. You might have to do a lot of testing.
  • Tinker with the code: work with the UI or try to solve issues that have been reported.
  • Introduce yourself to the community: create new issues that you see in the code, provide a link to your app and allow it to be forked, and also introduce yourself and explain why you want to be a part of these developers.
  • Start contributing: after you complete your first task, the doors will open to you.

Both of these open source projects take a different approach to how code is submitted, and I can see advantages and disadvantages to each. With Open Farm, you can take a hit on performance if you allow code that merely works into the main part of the project; it can be nice to have the functionality, but you should still encourage making the code more optimized in the end. With TEAMMATES, it’s a safe environment for students to learn how the open source world runs, and it provides a lot of guides for how people can start developing for them. I really like this approach because it builds a stronger foundation for students.

Here are the GitHub links to both projects:

OpenFarm – https://github.com/openfarmcc/openfarm/

TEAMMATES – https://github.com/TEAMMATES/teammates

 

 


by johnjamesa70 at January 14, 2017 08:55 PM


Len Isac

Docker – application container engine

This is my first blog for OSD600 (Open Source Development) at Seneca.  As one of my first tasks for analyzing an open source project, I’ve chosen the docker project (https://github.com/docker) which currently has 100 contributors with just under 2000 open issues being worked on by the GitHub community.

Docker is an application container engine that is hardware/platform independent, meaning it is capable of packing, shipping and running any application on any type of hardware or platform as a “lightweight” container (standardized unit); no particular language, framework, or packaging system is required.

Containers are considered a better/faster distribution method compared to setting up a virtual machine, since the container does not require a full copy of the operating system, giving it a much faster startup time (minutes to seconds).  Here is an interesting thread on Quora (https://www.quora.com/What-is-the-difference-between-containerization-Docker-and-virtualization-VMWare-VirtualBox-Xen) detailing some of the advantages of containers over VMs and also simply going over some of the main differences between the two.

Docker was released as open source in March of 2013.  The official website can be found here (https://www.docker.com).

One of the major milestones for this project came on September 19, 2013, when Docker announced a major alliance with Red Hat, which made Fedora/RHEL compatibility possible and made Docker a standard container within Red Hat OpenShift.  Another major open source project, the current leading open source automation server Jenkins (https://jenkins.io/), already has many plugins developed by the community dedicated to Docker compatibility (https://jenkins.io/solutions/docker/).


by Len Isac at January 14, 2017 08:33 PM


Peiying Yang

An amazing framework -AngularJS

AngularJS is based on JavaScript. With it, you can build your web applications in an easy way. It is a front-end web framework maintained by Google.

AngularJS provides many features. One of the most notable is two-way data binding, which can improve the user experience when filling out forms.


https://angularjs.org/


by pyang16 at January 14, 2017 07:32 PM


Simon de Almeida

Saving shsh blobs for downgrading an iDevice

Have you ever wanted to revert to an older iOS/tvOS firmware? With every major iOS/tvOS update, older-gen devices always suffer the same issues: battery drain, the device becomes slower, some features are removed (like slide to unlock), and so on. The most annoying part is that if Apple stops allowing older firmwares to be installed (aka Apple stops signing older firmwares), you are stuck on that unwanted firmware “forever”. Unless of course you saved your shsh blobs 🙂

tsschecker is an amazing tool written in C that allows you to save your shsh blobs while Apple is still signing them. With these blobs, you might have the ability to downgrade your device using the “prometheus” tool or any other downgrade tool.

tsschecker isn’t the first app on the market that allows you to save your shsh blobs, but it’s been around since December 2015. It currently has 6 open issues and around 7 people who contribute to it (@tihmstar being the project owner).

Why not contribute? It’s an amazing tool that protects you from future headaches!

by simon66 at January 14, 2017 04:09 PM

January 13, 2017


Dmytro Sych

Server-Client connection headache – SOLVED

For anyone who has tried to write their own protocol to connect a server and a client, the pain of having to constantly maintain the protocol on both sides is real. Introducing new features to your protocol or trying to implement backward compatibility can prove to be a serious ordeal.

Well, as always, Google has you covered! Google has created an awesome technology called Protocol Buffers. This software allows you to make faster and more efficient transmissions over the wire.

The initial work on the first version of Protocol Buffers, called Proto1, began in early 2001. However, work on Proto1 ceased after the development team at Google decided that the code was too messy. Eventually, Proto1 evolved into Proto2, which took some of the features from Proto1 while making the code more readable. Recently, Google released a new version of Protocol Buffers called Proto3. The list of new features includes extended language support (C++, Java, C#, Go, Python), cross-platform compatibility, and backward compatibility.
One of the coolest features is the ability to modify your protocol without needing to adjust both sides of the transmission.

All in all, I am very excited about this new tech and am very eager to try it myself. Check out the Git repository (520 issues, 230 contributors and 14,700 stars): https://github.com/google/protobuf
and the website: https://developers.google.com/protocol-buffers/


by dsych at January 13, 2017 11:42 PM


Mark Anthony Villaflor

The Microsoft Bot Builder

The Microsoft Bot Builder SDK is one of three main components of the Microsoft Bot Framework. The Microsoft Bot Framework provides just what you need to build and connect intelligent bots that interact naturally wherever your users are talking, from text/SMS to Skype, Slack, Office 365 mail and other popular services. The project is very young: the first commit on GitHub was made on February 28th, 2016. The project is posted on GitHub under Microsoft, and a related link is the Bot Framework site.

The project is written in C#. Since it was posted on GitHub, it has had 1,904 issues, of which 1,738 have been closed and 166 are still open. There are 64 contributors to this project. Developers writing bots all face the same problems: bots require basic I/O; they must have language and dialog skills; and they must connect to users, preferably in any conversation experience and language the user chooses. The Bot Framework provides tools to easily solve these problems and more for developers, e.g., automatic translation to more than 30 languages, user and conversation state management, debugging tools, an embedded web chat control, and a way for users to discover, try, and add bots to the conversation experiences they love.


by mavillaflor at January 13, 2017 08:22 PM

Introducing myself

Hello Internet! My name is Mark Anthony. I am in my 5th semester in Computer Programming and Analysis at Seneca College. I like programming in PHP, SQL, HTML, CSS, C++, and C# languages.


by mavillaflor at January 13, 2017 07:25 PM


Rahul Gupta

Blog Post 1 – Guava: Google Core Libraries for Java

Guava provides a core set of libraries for Java, including some of the latest and most commonly used ones: a graph library, functional types, hashing, primitives, APIs/utilities for concurrency, the new multimap and multiset collections, and much more.

It is definitely a great asset for beginners and other developers looking to work with high-profile features such as graphs and other functional types. It lists all the libraries with their functionality, so it is easy to understand and fairly easy to use as well.

Guava was initially started on September 13, 2009 and has good documentation on Wikipedia. It is implemented in Java and currently has 670 open issues. So far 93 people have contributed to this project. It helps promote best coding practices and helps reduce coding errors.

https://github.com/google/guava

Guava Example –


by rahul3guptablog at January 13, 2017 06:45 PM


Oleg Mytryniuk

Design your own VR world

A couple of years ago, only a few people could believe that Virtual Reality technology would become an important part of our lives, but nowadays, after the Oculus Rift and HTC Vive hit the market in 2016, we can definitely say that the Virtual Reality era is coming. More and more programmers have been getting involved in the technology's development recently. Simultaneously, the number of open-source VR projects has also increased, and many of them are very interesting, for example the GearVR Framework project.

The GearVR Framework (GearVRf) is an open source VR rendering library for application development on VR-supported Android devices. The library can be used by developers to create their own VR applications. Creating different models, colouring them, or making scene objects visible are some of the features available to you in GearVRf. Use them to design your own VR world. The framework has a Java interface, and you can use Android Studio as a build environment and to prototype rapidly.

For your information, the GitHub community has a few examples that are very helpful for creating your own apps.

The project started in 2016, and despite being a pretty new project, it involves many programmers who contribute to the framework's development. So far, there are 42 contributors to the project; why not be the next one? GearVRf is written in C++, and if you are really interested in the topic and have programming experience in C++, there are 77 open issues you are welcome to work on, making your own contribution to VR development.

Or maybe you want to import, build, and run your own and the sample GearVRf applications in Android Studio? It is up to you!
The project is well documented and you can find basic information about GearVRf on the project's GitHub page. For detailed information, including documentation, please refer to the official web page of the project: https://resources.samsungdevelopers.com/Gear_VR/020_GearVR_Framework_Project


by osd600mytryniuk at January 13, 2017 06:19 PM

About Me

Hello everyone. My name is Oleg Mytryniuk. I am really interested in different technologies, and that's why I study computer programming at Seneca College and why I am taking the OSD600 course, which seems to be very interesting in terms of learning more about technologies and being able to contribute to the many interesting projects that can be found on the web.

So far, I see mobile app development as the most interesting field for me to learn. Among languages I prefer Java, and I would really like to work on an open source project based on that language.

OSD600 seems to be a very interesting course. There are many famous open source projects, such as Linux and Firefox, and I feel like I will really enjoy the course and learn a lot.


by osd600mytryniuk at January 13, 2017 06:18 PM


Tony Park

OpenVR in ValveSoftware

This open project is called OpenVR and here is the link : https://github.com/ValveSoftware/openvr

OpenVR is an API and runtime that allows access to VR hardware from multiple vendors without requiring that applications have specific knowledge of the hardware they are targeting. This repository is an SDK that contains the API and samples. The runtime is under SteamVR in Tools on Steam.

This project was initially released on April 30, 2015, so it is less than two years old, and not only the GitHub community above but also https://steamcommunity.com/steamvr and http://steamvr.com are active around this project.

The main language is C++, there are 182 open issues on GitHub, and more than 500 people are contributing to this project.

Overall, this is one of the most popular VR APIs, and it is used across the VR industry without relying on a specific hardware vendor's SDK.


by tonypark0403 at January 13, 2017 06:10 PM


Jerry Goguette

Protocol Buffers – a quick look


There's this interesting open source software I've seen on GitHub. It's called Protocol Buffers, Google's data interchange format. Protocol Buffers are used by Google for the serialization of structured data. In addition, it's language-neutral and platform-neutral.

Protocol Buffers (a.k.a. protobuf) was first developed at Google starting in early 2001, and then further developed over the years. Protobuf has been used in many projects such as Caffe. You can find more information on Protocol Buffers here.

Protobuf is implemented in C++, C#, Go, Java, Python, Ruby, and JavaNano.
It currently has 513 open GitHub issues that need resolving. It also has 230 contributors to date.

Overall, protobuf seems like a very cool project and I hope to see it continue to grow and improve.


by jgoguette at January 13, 2017 06:06 PM


Max Fainshtein

Lab1-streama

Streama is an open source project which describes itself as a self-hosted Netflix. The purpose of the project is to allow users to post their own videos, whether they are home videos or downloaded shows. The project started on July 29th, 2015, and at the time of writing (January 13th, 2017) Streama's GitHub page has 88 open issues. The application is written in JavaScript and has 21 contributors. This application is designed so that a user can create a collection of videos and view them, as well as share access to their saved shows, movies, and videos with others.


by mfainshtein4 at January 13, 2017 06:06 PM


Theo D

Blog Post 1 – DPS909

Hello,

My name is Th3o. I’m a student in the BSD program at Seneca College.

For the first blog I chose osquery: https://github.com/facebook/osquery


Osquery allows you to explore operating system data through the use of SQL-based queries. It can display running processes, loaded kernel modules, open network connections, browser plugins, hardware events, and file hashes.

It was started on July 27, 2014 by Facebook. The software is open source, obviously, and is multi-platform (macOS, Linux, CentOS). More information can be found at the Osquery Blog.

The program is written in C++ and executes SQL commands. The project currently has 81 open issues and 1,054 closed issues. There are 126 contributors currently on the GitHub project.

Some Examples of the Project’s Execution



by theoduleblog at January 13, 2017 05:59 PM


Eugueni Antsyferov

Testing Post

Hi, I am Eugueni and I am testing my blog.


by Genya Blog at January 13, 2017 05:45 PM


Kevin Ramsamujh

OSD600 Lab 1: Android Universal Image Loader

Android Universal Image Loader is the #1 Android library on GitHub by nostra13 and it makes image loading in Android much simpler. It offers great flexibility and numerous configuration options that will allow you to load and display images in a lot less code than would be needed without this library.

This library seems to have been started on November 27, 2011, making it around 5 years old, but it is still getting updates from the community, with the latest contribution posted on January 13, 2017.

Boasting features such as:

  • Multithread image loading
  • Wide customization of ImageLoader’s configuration
  • Many customization options for every display image call
  • Image caching in memory and/or on disk
  • Listening loading process

This is an amazing library for use in any Android project that requires image loading. I have personally used this library in the final project for my Android course at Seneca, and it helped solve many of the problems I was having with image loading and caching. Check it out at Android Universal Image Loader.


by kramsamujh at January 13, 2017 05:40 PM


Peiying Yang

First blog post

This is your very first post. Click the Edit link to modify or delete it, or start a new post. If you like, use this post to tell readers why you started this blog and what you plan to do with it.


by pyang16 at January 13, 2017 05:38 PM


Wayne Williams

Introduction

Hello,

This is Wayne Williams starting a blog for SPO600. Hopefully some real cool things will be done this semester and we'll get to write about it!! Enjoy!

by Siloaman (noreply@blogger.com) at January 13, 2017 05:30 PM



Andrey Bykin

A quick look at TensorFlow

With the growth of machine learning, many people are looking into libraries and tutorials on learning more about machine learning.

TensorFlow is no exception. Originally developed by engineers on Google's Brain Team, it is an excellent Python library providing the operations necessary to do numerical computation using data flow graphs. TensorFlow was open sourced a year ago by Google, which had used it internally for its machine learning and deep neural network research. TensorFlow is written in C++ and is highly optimized for deploying computation onto multiple CPUs and GPUs. On GitHub, TensorFlow has a 600-member contributor team with over 12,000 commits, making it one of the more popular projects listed on GitHub.

If you are interested in learning more about TensorFlow, you can visit their main website at : https://www.tensorflow.org

If you want to contribute and look at the source code, you can find more information on the official TensorFlow GitHub page at : https://github.com/tensorflow/tensorflow

For more hands-on tutorials and starter material, here are some good starting places: Official TensorFlow Getting Started: https://www.tensorflow.org/get_started/

Website dedicated to Teaching TensorFlow : http://learningtensorflow.com/


by Andrey Bykin at January 13, 2017 05:18 PM


Nagashashank

Open Source Project

Developing for mobile is complicated, and there are many projects that help make coding easier. The project that interests me is "SlidingMenu", because it's a mobile feature that lets users navigate a mobile application easily.

SlidingMenu is an open source Android project that allows us to make apps with a sliding menu layout like YouTube, Hangouts, etc.

The default sliding menu that Google provides is complicated to use, and this project simplifies the usage of the sliding effect.

There are no other websites associated with it; it only has a GitHub page. It is developed by Jeremy Feinstein and licensed under Apache 2.0.

The project is for Android, so it is written in Java. There are about 259 open issues and 376 closed issues. 21 developers have contributed to this project. A lot of apps use this project, such as Foursquare, LinkedIn, VLC for Android, ESPN ScoreCenter, 9GAG, and The Verge.

 


by npolugari at January 13, 2017 05:13 PM


Kevin Ramsamujh

OSD600 Introduction

My name is Kevin Ramsamujh and I am in my final semester of the CPA program at Seneca College. This is an introductory post for Lab 1 of the OSD600 course.


by kramsamujh at January 13, 2017 05:12 PM


Jerry Goguette

First blog post

Hello, my name is Jerry Goguette. I’m currently a third year Bachelor of Software Development student at Seneca College.


by jgoguette at January 13, 2017 05:05 PM


Xiao Lei Huang

Streama

Streama is a two-year-old open source application written in JavaScript. Streama allows users to digitize hard copies of movies and TV shows to a local/remote server and organize them into categories. Streama has a beautiful user interface that mimics existing streaming services like the open source Popcorn Time as well as the tech giant Netflix. There are currently 88 issues, mostly regarding playback on multiple devices and managing files. There are 21 contributors to Streama, with Dularion as the top contributor with 107 commits.

If you want to contribute to this open source software, click here.


by dps909blog at January 13, 2017 05:01 PM


Christopher Singh

Lab 1 – OSD600: “A.W.E.S.O.M. O”

The open source community is an integral part of our society as programmers. It’s also a practical way to get involved and make a name for yourself by contributing to pre-existing projects, small or large. Finding the right project that interests you and that you feel you can contribute the most to can be a painful process.

This is why I chose to write about A.W.E.S.O.M. O, a big list of interesting open source projects. Some programmers are not fortunate enough to be proficient in all known programming languages. Thankfully, A.W.E.S.O.M. O has divided open source projects into categories based on language. You can simply browse this list on its GitHub page, https://github.com/lk-geimfari/awesomo.

With 28 contributors and hundreds of commits, projects are added and errors fixed daily. This project was founded by Líkið Geimfari.


by cgsingh at January 13, 2017 04:50 PM


Badr Modoukh

DPS909 Lab 1

Learning a new programming language can be hard and challenging, especially if you are new to programming.

An interesting open source project I found to solve this problem is called Exercism. Exercism gives you hundreds of practice problems in over 30 programming languages and is a place where you can get feedback on your solutions.

Exercism is especially useful for:

  • gaining fluency in your first programming language
  • ramping up in a new programming language
  • developing the skills to be a great lead developer: code review, refactoring, and mentoring

Exercism is written in Ruby and has 363 contributors who have contributed to the codebase. If you are interested in this project and want to learn more about this project its GitHub is https://github.com/exercism/exercism.io and their website is http://exercism.io/.


by badrmodoukh at January 13, 2017 06:50 AM


Dang Khue Tran

OSD600: Lab 1: Atom – “A hackable text editor for the 21st Century”

This is an open source project called Atom, a text editor that allows programmers to read and write code with unlimited customization. The official download page for Atom is https://atom.io/. Their GitHub repository is here.

This repository was started on August 14, 2011, with version 1.0 released on June 25, 2015, so the project has been going for about 6 years. The software is currently on version 1.13. It seems like there have not been many releases of the software, but the repository has been very active throughout its lifetime.

The project is programmed mostly in CoffeeScript, with about a quarter of the code written in JavaScript. The repository has 341 contributors, with about 1,700 open issues and about 8,700 already closed.

Atom is available on Linux, Windows and macOS. It has a built-in package manager that you can browse and install from to add more functionality to the text editor, or you can start creating your own packages within the editor itself. Right out of the box, Atom has autocompletion, a file system browser (enabling users to open anything from a single file to a whole project), multiple panes to maximize your screen real estate, and basic features like find and replace. It comes pre-installed with 4 UI themes and 8 syntax themes for you to choose from, and packages that you can always disable if you don't want the features.

Atom is my text editor of choice and I would definitely try to contribute to this project.

 


by trandangkhue27 at January 13, 2017 06:33 AM


John James

OSD600 Lab 1

When I was first told to explore the world of open source projects I was very hesitant, since it was the unknown for me. I started getting fears like: what if I don't find anything I will like, or anything I can contribute to? These fears felt rational, but they also seem to be common for most people who want to try working within open source.

After about an hour of looking for open source projects on GitHub, I found one that felt like "this title defines my entire life". It is called "OpenFarm" and its description is "a free and open database and web application for farming and gardening knowledge. One might think of it as the Wikipedia for growing plants, though it functions more like a cooking recipes site. The main content are Growing Guides". It looks like this project started about three years ago and has had a lot of development; the website can be found at https://openfarm.cc/. Unfortunately for me, this project is mainly coded in Ruby, a language I don't have any practice in, but after looking at some of the code they have, I feel like I can pick it up very easily.

If you want to look into this project, check out its GitHub page: https://github.com/openfarmcc/OpenFarm. I think this is a very cool project that can be informative and help people start up a garden or create a farm.


by johnjamesa70 at January 13, 2017 04:32 AM


Timothy Moy

OSD600 Lab 1: atom by atom

Atom is a cross-platform hackable text editor developed for public use. Aside from being cross-platform, it solves the problem of users being unable to customize their text editors to their liking or for their specific purpose. Although it can be fully customized, it starts off as a full-featured text editor for those who don't need the advanced functionality.

According to their GitHub page, it started August 14th, 2011 and is approximately 6 years old. Their public beta was released in early 2014. You can find their official website here and their official forum here. It is written in HTML, JavaScript and CSS, with Node.js integration, and runs on the Electron framework. On its GitHub page, there are 1,682 open issues at the time of writing, and 341 people have contributed to the code thus far.

Atom is used by developers and programmers as a development environment. Notable companies have also used it and contributed features, like MuleSoft's API Workbench, Jibo Robot's SDK tools, and Facebook's Nuclide (http://blog.atom.io/2016/03/28/atom-reaches-1m-users.html). With over a million users, it seems Atom has grown into something much bigger than when it first started.

 


by Timothy Moy at January 13, 2017 04:26 AM


Badr Modoukh

Welcome to my blog

Hi, my name is Badr Modoukh and I am currently in my last semester in the BSD (Software Development) program at Seneca@York. I enjoy web development, web design, debugging software, and programming in Java. I am interested in getting involved in the open source community and learning as much as I can.


by badrmodoukh at January 13, 2017 03:56 AM

January 12, 2017


Shivam Gupta

Lab1 – DPS909 – Shivam Gupta

Hi guys, my name is Shivam Gupta. The reason I took open source development is that I feel it gives indirect insight into, and experience of, the projects and work ethic required to be successful in the workplace.

Today, I am going to talk about an open source project named freeCodeCamp. This is a non-profit project which started 3 years ago and has over 543 contributors. The project was developed using HTML and JavaScript.

The goal behind this project is to help students develop powerful skills in technologies such as HTML5, CSS3, JavaScript, databases, Git & GitHub, Node.js, React.js and D3.js.

The way it works is that students choose a particular language that they want to learn, and the skills are divided into lessons that they progress through. The website provides feedback and a certificate when a certain level is reached.

Being an open source project, the website is looking to add more languages that students can learn or enhance their skills in.

The project can be found at

https://github.com/freeCodeCamp/freeCodeCamp

 


by sgupta44blog at January 12, 2017 03:48 PM


Ray Gervais

Kickstarter Open Sourced Android and iOS Applications

Kickstarter Android Application

OSD600 Open Source Blog

“Welcome to Kickstarter’s open source Android app! Come on in, take your shoes off, stay a while—explore how Kickstarter’s native squad has built and continues to build the app, discover our implementation of RxJava in logic- filled view models, and maybe even create an issue or two.”

Kickstarter, a company based around crowdfunding and early adopting, open sourced its iOS and Android applications on February 8, 2015. This was the engineers' response to Kickstarter becoming a Public Benefit Corporation, seeing that open sourcing their work could provide rich resources and ideas to the developer community, as many others have done.
The first two pull requests related to interface modifications and accessibility improvements. Expanding upon their knowledge of and commitment to functional programming, the Kickstarter engineers created a rewarding experience for those browsing their source code by providing screenshots of the interface, excellent commenting, well-rounded testing, and their philosophy on view models.

The applications are written in the mobile devices' native languages, Swift and Java respectively for iOS and Android, and utilize many frameworks and third-party extensions of each language.
The applications are licensed under the Apache Version 2 license, which is explained here:

“The Apache License is permissive in that it does not require a derivative work of the software, or modifications to the original, to be distributed using the same license (unlike copyleft licenses – see comparison). It still requires application of the same license to all unmodified parts and, in every licensed file, any original copyright, patent, trademark, and attribution notices in redistributed code must be preserved (excluding notices that do not pertain to any part of the derivative works); and, in every licensed file changed, a notification must be added stating that changes have been made to that file.”

As of this moment, eleven issues on GitHub are open, many relating to the build process of the Android application for those wanting to extend it beyond the original engineers' implementations. Even more so, as pull requests are merged back into the code base after review, these updates are then patched into the next update of the application available to end users on the platforms' app stores. Brandon Williams, original developer of the iOS version, has expressed interest in writing Kotlin code in their Android application while taking suggestions from the developer community.

by RayGervais at January 12, 2017 03:26 AM

January 11, 2017


Len Isac

Software Portability & Optimization

This section of my blog will include all topics related to my SPO600 course at Seneca.

Blog topics will include:

  • Open source code review
  • Assembly Language
  • Compiler options & optimization
  • Algorithm Selection
  • Computer Architectures
  • Course project stages: I, II, & III

by Len Isac at January 11, 2017 10:58 PM


Matt Welke

Integrating New Logging into VAT

Now that I’ve finished making the back end API endpoints for the new error log display part of our project, I’m going to be integrating it into the VAT. They will have the option to display either a query (to perform) or the logs (to examine). Because we’ve migrated the VAT over to the Redux framework since we completed it in December (thanks to the work of my teammate), this should mean just modifying our Redux store to include the ability to manage log data as part of its state. Then some actions… some React components to show the data… and voila! (Let’s hope it does indeed end up being simple… I’m not too worried though)

My end points for my logging API ended up looking like this:

  • GET /logs/api/summary

    which returns a summary of all the errors, dividing them into categories (client module, pushApi, getApi) and then groups (named using appropriate fields from each category of error).

  • GET /logs/api/summary?categoryName=String&groupName=String

    which returns an array of errors, including all the information contained within each error, given a groupName for that error and the category to which the error belongs.


by Matt at January 11, 2017 10:20 PM


Timothy Moy

Hello World

My name is Timothy Moy and at the time of writing am a student at Seneca College. I am studying computer programming and hope that this blog will be of use to others in the open source community.


by Timothy Moy at January 11, 2017 09:10 PM


Ray Gervais

Source Code to 2017

With the start of the new year, and a semester which contains a promising set of courses that many are excited for, it’s appropriate that open source technologies have become the leading topic of this semester. OSD600 and SPO600 aim to guide us on many topics related to open source platforms, and promise that our contributions will benefit the everyday consumer in a variety of ways. With open source, the opportunities to shape the upcoming state of technology are endless, allowing us to contribute to the source code which will make up 2017.

by admin at January 11, 2017 07:54 PM


Henrique Coelho

Concurrent functions with Go using channels

I've always been a big fan of new technologies and languages: there is always something new and interesting in them. For the past few weeks, I've been experimenting with Go, a free and open-source programming language made by Google. It is imperative, strongly typed, and compiled, with a syntax that reminds me of C; just like C, it also has pointers, but it has a garbage collector.

One thing that really caught my attention was how concurrent (asynchronous) functions can be made and synchronized: they are called goroutines, described as "light-weight threads of execution", and can be synchronized using channels, a First In, First Out queue. In this post, I want to show one example of how these goroutines and channels work.

This example is a little application that simulates a pizzeria: we will have a line that makes the sauce, a line that makes the dough, and a line that prepares the toppings; after all these 3 lines have their ingredients ready, the pizza is assembled and baked, and then a receipt is printed. In normal synchronous programming, first, we would make the sauce, then the dough, then we would prepare the toppings, assemble and bake, and then print the receipt. In asynchronous programming, however, we can fire the functions to make the sauce, the dough and prepare the toppings all at the same time, then, after we have all these 3 steps, we can assemble the pizza and bake it. In asynchronous JavaScript, we would start executing the three first functions, and then, when the last one was finished, it would execute a callback to assemble and bake the pizza, and then, it would execute another callback to print the receipt. In Go, we can use Goroutines and channels for this task.

To keep things simple, let's suppose that every line (the line that makes the dough, for example) can work on several orders at the same time. For example: they can make the pizza dough for 3 clients at the same time.

First, I'll declare the name of my package and make the imports for the modules I need:

package main

import (
    "fmt"
    "math/rand"
    "time"
    "sync"
)

Now I will make a struct (as far as I know, there are no classes in Go, only structs, but you can attach methods to them and turn them into classes) for a Pizza. A pizza will have a client (the name of the client, a string), some details about it (how the dough was made, how the sauce was made, etc.; these will be channels of strings, and I will show you why later), some boolean values that indicate which steps were completed (also channels), and a function that can be called when everything is ready and the pizza is finished.

type Pizza struct {
    client  string
    details struct {
        dough     chan string
        sauce     chan string
        toppings  chan string
        assembled chan string
    }
    completed struct {
        dough     chan bool
        sauce     chan bool
        toppings  chan bool
        assembled chan bool
    }
    Done func()
}

I also made a little function that will give me a random integer so every step will take a different amount of time to be completed:

func randomTime() time.Duration {
    r := time.Duration(rand.Int31n(9))
    return time.Second * r;
}

Now, the three functions that will be run at the same time: makeDough, makeSauce and prepareToppings. They are just normal functions; the difference is how they get executed. This is what makeDough looks like:

// This function receives the name of the client, a string channel for it
// to record a message (details), and a bool channel for it to record when the dough
// is ready
func makeDough(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting making pizza dough for #", client, "\n")

    // We take a random amount of time for the function to be completed
    time.Sleep(randomTime())

    fmt.Print("Finished pizza dough for #", client, "\n")

    // Recording the message and "true" in the channels
    // You can imagine the channel as being "cout" from C++
    // and the <- operator being "<<": you are recording
    // something into the channel
    message <- "Pizza Dough"
    completed <- true
}

And here are the other functions:

func makeSauce(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting making pizza sauce for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished pizza sauce for #", client, "\n")

    message <- "Pizza Sauce"
    completed <- true
}

func prepareToppings(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting preparing pizza toppings for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished preparing pizza toppings for #", client, "\n")

    message <- "Pizza Toppings"
    completed <- true
}

Simple enough, right? Channels are like queues, where you put data in, and then you can pop it later. But here is the catch: channels will block the execution of the function until the other "side" is ready; in other words, if you push something into the channel, it will block the function until you pop it. It also works the other way around: if you try to pop something from an empty channel, it will block the function until there is something there to be popped. This can be used to pause/unpause goroutines.

Now, if you go back to the functions that I described, you can imagine what is going to happen in this case:

func prepareToppings(client string, message chan<- string, completed chan<- bool) {
    fmt.Print("Starting preparing pizza toppings for #", client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished preparing pizza toppings for #", client, "\n")

    // The following line will be executed and then the goroutine will stop: it will
    // only continue when we remove the string from the channel
    message <- "Pizza Toppings"

    // This line will only be executed when the message "Pizza toppings" is
    // removed from the channel above
    completed <- true
}

So, to make sure we don't reach a deadlock, we must make sure that the channels are properly emptied and closed: I will show how to extract the data from a channel and how to close them in this next function. This function will listen to the "completed" channels to make sure the sauce, the dough, and the toppings are prepared - we can only assemble and bake the pizza if we have these three parts ready:

func assembleAndBake(pizza Pizza) {

    // Here I am extracting the boolean from the "dough" channel. Since
    // we don't care about the values, we just discard them
    // Notice that the execution will be blocked here until there is a "completed" value for
    // dough that we can pop; in other words: the function will not execute past this
    // until we get a boolean from the "makeDough" function
    <- pizza.completed.dough

    // After we got the message, we can close the channel to prevent any more writing into it
    close(pizza.completed.dough)

    <- pizza.completed.sauce
    close(pizza.completed.sauce)

    <- pizza.completed.toppings
    close(pizza.completed.toppings)

    fmt.Print("Starting assembling and baking pizza for #", pizza.client, "\n")

    time.Sleep(randomTime())

    fmt.Print("Finished assembling and baking pizza for #", pizza.client, "\n")

    // If we reached here, it means that the pizza is now assembled and baked: we
    // record a message and a boolean for this event in the channels
    pizza.details.assembled <- "Assembling and baking"
    pizza.completed.assembled <- true
}

Now I am going to receive the details (messages) in my function to print the receipt, since I want to print the messages on the receipts. This is what my function looks like:

// This function receives the Pizza object
func printReceipt(pizza Pizza) {

    // "defer" tells the function to execute this line only when the function finishes: it will
    // tell the program that this pizza is done and the "chain" is over for this client.
    // I will explain what this part does in more details later - I need to show you the 
    // rest of my script first. For now, just ignore it.
    defer pizza.Done()

    // Here I am taking whatever message we have in the details for the dough and
    // recording it in a variable called 'msg1'.
    msg1 := <- pizza.details.dough
    close(pizza.details.dough)

    msg2 := <- pizza.details.sauce
    close(pizza.details.sauce)

    msg3 := <- pizza.details.toppings
    close(pizza.details.toppings)

    msg4 := <- pizza.details.assembled
    close(pizza.details.assembled)

    // Here I am popping the boolean value from the "completed" field of the pizza. Since
    // I don't really care what the value is, I am not saving it anywhere
    <- pizza.completed.assembled
    close(pizza.completed.assembled)

    // If we reached here, it means that the pizza was assembled and baked - we can now
    // print the receipt
    fmt.Print("--------------------------------------------------\n" +
              "Receipt for #", pizza.client, ":\n" +
              ". ", msg1, "\n" +
              ". ", msg2, "\n" +
              ". ", msg3, "\n" +
              ". ", msg4, "\n" +
              "--------------------------------------------------\n")
}

Alright, these are the functions we need to assemble the pizza, now we just need the main function.

The main function will be responsible for launching the goroutines for three different clients: John, Alan and Paul. However, it also needs to wait for their orders to finish before the process exits - how can we ensure this will happen?

To make sure our process will not exit before everything is done, we can use a WaitGroup: imagine it as a class that you start and in which you can specify how many groups you want to wait for (in this case, three: one for every client); every time a group is completed, it calls the function waitGroup.Done(), so when all of them have been called, the WaitGroup is finished.

This is what my main function looks like:

func main() {
    // Seeding a random time
    rand.Seed(time.Now().UTC().UnixNano())

    // Creating a wait group
    var wg sync.WaitGroup

    // Making a list of clients
    clients := []string {
        "John",
        "Alan",
        "Paul",
    }

    // Getting the number of clients
    clientsNo := len(clients)

    // Looping through every client
    for i := 0; i < clientsNo; i++ {

        // For every client, we add one more group in the WaitGroup
        wg.Add(1);

        // Instantiating a new Pizza for the client
        pizza := Pizza{}
        pizza.client = clients[i]
        pizza.details.dough     = make(chan string)
        pizza.details.sauce     = make(chan string)
        pizza.details.toppings  = make(chan string)
        pizza.details.assembled = make(chan string)
        pizza.completed.dough     = make(chan bool)
        pizza.completed.sauce     = make(chan bool)
        pizza.completed.toppings  = make(chan bool)
        pizza.completed.assembled = make(chan bool)

        // This part is important: remember that line that I "deferred" a method call for
        // Done()? This is where it comes from: when the pizza is done, it tells the
        // WaitGroup that there is one less group to wait for
        pizza.Done = wg.Done

        // Here we are launching the asynchronous functions: the "go" prefix specifies
        // that these are not ordinary functions, but goroutines. To these routines, I am
        // passing the channels and other data they need
        go makeDough(pizza.client,       pizza.details.dough,    pizza.completed.dough)
        go makeSauce(pizza.client,       pizza.details.sauce,    pizza.completed.sauce)
        go prepareToppings(pizza.client, pizza.details.toppings, pizza.completed.toppings)
        go assembleAndBake(pizza)
        go printReceipt(pizza)

    }

    // Here we are telling the WaitGroup to wait until all the groups are done
    wg.Wait()

}

These are the outputs:

For only one client (Paul)

Starting preparing pizza toppings for #Paul
Starting making pizza sauce for #Paul
Starting making pizza dough for #Paul
Finished pizza dough for #Paul
Finished pizza sauce for #Paul
Finished preparing pizza toppings for #Paul
Starting assembling and baking pizza for #Paul
Finished assembling and baking pizza for #Paul
--------------------------------------------------
Receipt for #Paul:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------

For all three clients

Starting making pizza dough for #Alan
Starting preparing pizza toppings for #John
Finished preparing pizza toppings for #John
Starting preparing pizza toppings for #Alan
Starting making pizza sauce for #Alan
Starting making pizza sauce for #Paul
Starting making pizza dough for #Paul
Starting preparing pizza toppings for #Paul
Starting making pizza sauce for #John
Starting making pizza dough for #John
Finished pizza dough for #Alan
Finished pizza dough for #Paul
Finished preparing pizza toppings for #Alan
Finished pizza dough for #John
Finished pizza sauce for #Paul
Finished preparing pizza toppings for #Paul
Starting assembling and baking pizza for #Paul
Finished pizza sauce for #Alan
Starting assembling and baking pizza for #Alan
Finished pizza sauce for #John
Starting assembling and baking pizza for #John
Finished assembling and baking pizza for #Alan
--------------------------------------------------
Receipt for #Alan:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------
Finished assembling and baking pizza for #John
--------------------------------------------------
Receipt for #John:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------
Finished assembling and baking pizza for #Paul
--------------------------------------------------
Receipt for #Paul:
. Pizza Dough
. Pizza Sauce
. Pizza Toppings
. Assembling and baking
--------------------------------------------------

by Henrique Salvadori Coelho at January 11, 2017 04:29 PM


Igor Naperkovskiy

Introduction

Hi, my name is Igor Naperkovskiy. I’m a 4th year BSD student currently doing my 2nd co-op at Scotiabank and taking DPS909 at the same time; hopefully it works out well. I’m really excited to take this course and expect to learn a lot from it.


by naperkovskiy at January 11, 2017 04:02 PM


John James

Testing for lab 1

Today I found out that I will have to be blogging in two of my classes this semester (OSD600 and SPO600). I’ve never blogged before so I hope this works


by johnjamesa70 at January 11, 2017 08:50 AM

January 09, 2017


Matt Welke

Making Error Logs Great Again

Today I continued work on a nicer log display. Right now we have a route for our Get API that queries the database and the file system of the EC2 instance for all errors that were logged from every part of our application and displays them. It’s a ton of information, and it’s all jumbled together and displayed on the DOM at once. This slows it down. You click on a tab (the only organization it has) and it hangs for a few seconds while it adds all the data to the DOM. I’m writing new API end points for the Get API. The errors will be grouped, and one end point will query for and display just the summary of information about the errors (counts for each error type, ratios, etc), and the other end point will get the actual errors to display. I might use React components for this or just plain JS if it doesn’t need much.

The result will be something that can show you useful information about the kinds of errors it’s experiencing without having to download them all. The interface will be snappy. It will be a more useful tool to use to develop and for anyone to observe in the future after launch.

Some challenges I had developing this were wrapping my head around the kind of data the API end points need to send to the browser. It took me a while to realize that less is more. The summary end point should not even display any errors at all. I also had to think carefully about how to categorize the error types and what that means for the MongoDB query syntax.
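For example, a summary end point could lean on MongoDB's aggregation framework to return just the counts per error type, leaving the error bodies out entirely. A rough sketch of the idea (the "errors" collection and its "type" field are assumptions for illustration, not our actual schema):

// Hypothetical "errors" collection where each document has a "type" field.
// Returns one document per error type with its count; ratios can be derived
// from these counts on the server or in the browser.
db.errors.aggregate([
    { $group: { _id: "$type", count: { $sum: 1 } } },
    { $sort: { count: -1 } }
]);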


by Matt at January 09, 2017 09:55 PM

January 05, 2017


Henrique Coelho

A problem with Redux: how to prevent the state from growing forever

In my last blog post I explained how we used Redux to organize the data flow in an application; however, Redux has a rare problem that doesn't seem to have a simple solution (by simple, I mean not having to install another 26 libraries): as we create new states, the old states get archived, and this can mean several megabytes of data stored in the client.

Now, there are good reasons why this should not be a problem for 99% of the applications:

  1. When we make a new state, we create a shallow copy of the previous one, not a deep copy. This means that the references will still point to the same data, except for the ones that changed. In other words, if your old state took 200kb and your new state created another 1kb, the total amount will be 201kb, and not 401kb.
  2. Most websites don't store that much data, so even if you use the same single-page app for days, you'll likely not even reach 1 MB.

Despite being rare, it is a problem. So how can we solve it?

I'll explain it with an example: an application in which you can turn a lightbulb on and off, and also select its colour (red, green, and blue). It also has a little side effect: if the lamp is off and you change its colour, it turns on.

I will first make an application using only React + Redux, and then I will use Flux (a paradigm similar to Redux, but one that only stores the current state instead of the whole archive) to solve the problem.

This is how we could build this application with React + Redux:

Observations:

I will make this application in a single file, so if you simply copy and paste the code below in order, it should work.

This is what my imports look like:

import React from 'react';
import ReactDom from 'react-dom';
import { Provider, connect } from 'react-redux';
import { createStore, combineReducers } from 'redux';

#1 Planning the state

We should also try to imagine what the state would look like in order to plan our reducers. Since we need to store the colour and power of the lightbulb, we could model our state this way:

// This is not actual code, but just a representation of what the state would look like
// You do not need this in the file
{
  isOn: false,
  colour: '#FF0000'
}

This means that we will have 2 reducers: isOn and colour.

#2 Making the actions to toggle the light on/off and also the colour

// Action types
const TOGGLE_LIGHT = 'TOGGLE_LIGHT';
const CHANGE_COLOUR = 'CHANGE_COLOUR';


// Actions
const actions = {

  // Receives nothing
  toggleLight() {
    return {
      type: TOGGLE_LIGHT,
    };
  },

  // Receives a value for the new colour
  changeColour(value) {
    return {
      type: CHANGE_COLOUR,
      payload: value,
    };
  },

};

#3 Now we create the reducers

// Reducers
const reducers = {

  // Reducer for the 'isOn' attribute
  isOn(state = false, action) {
    const type = action.type;

    switch(type) {
      case TOGGLE_LIGHT:
        return !state;
        break;

      // When the user changes a colour, we turn
      // on the lights
      case CHANGE_COLOUR:
        return true;
        break;

      default:
        return state;
        break;
    }
  },

  // Reducer for the 'colour' attribute. The default
  // colour will be red
  colour(state = '#FF0000', action) {
    const type = action.type;
    const payload = action.payload;

    switch(type) {
      case CHANGE_COLOUR:
        return payload;
        break;

      default:
        return state;
        break;
    }
  },

};

#4 Combine all the reducers into a root reducer

Now that we have all the reducers, we must put them together into a single one:

// The root reducer groups all the other reducers together
const rootReducer = combineReducers({
  isOn: reducers.isOn,
  colour: reducers.colour,
});

#5 Create the store

Now we create the store and pass the root reducer:

// Store
// The resulting state that we get from the reducers
// would look like this, if the light was turned on
// and the colour was green:
// { isOn: true, colour: '#00FF00' }
const store = createStore(rootReducer);

#6 Create the React component

In this case, I am using a shorthand for creating React components: it receives the props isOn, colour, toggle (function), and changeColour (function):

// React component for the lightbulb
const Lightbulb = ({
  isOn,
  colour,
  toggle,
  changeColour,
}) => (
  <div>
    {isOn ? (
      <span style={{ color: colour }}>ON</span>
    ) : (
      <span>OFF</span>
    )}
    <br />
    <button onClick={toggle}>Turn {isOn ? 'off' : 'on'}!</button>
    <button onClick={() => changeColour('#0000FF')}>Blue light</button>
    <button onClick={() => changeColour('#00FF00')}>Green light</button>
    <button onClick={() => changeColour('#FF0000')}>Red light</button>
  </div>
);

#7 Bind the React component to Redux

Here I am using the connect function provided by react-redux to connect the component to our state and dispatcher:

// Element to be rendered (Lightbulb connected to Redux)
const LightbulbElement = (() => {

  const mapStateToProps = (state) => ({
    isOn: state.isOn,
    colour: state.colour,
  });

  const mapDispatchToProps = (dispatch) => ({
    toggle() {
      dispatch(actions.toggleLight());
    },

    changeColour(colour) {
      dispatch(actions.changeColour(colour));
    },
  });

  return connect(
    mapStateToProps,
    mapDispatchToProps,
  )(Lightbulb);

})();

#8 Make the Application component

The Application component will be the main component: we will use the Provider component from react-redux in order to bind the store:

// Application (the element with redux bound to the store)
const Application = (
  <Provider store={store}>
    <LightbulbElement />
  </Provider>
);

#9 Rendering

Now we can render the Application component in the dom:

// Rendering the app in the #app div
ReactDom.render(Application, document.getElementById("app"));

Done!

Ok, here is the problem: what if "colour" was actually a string of 20Mb? If you don't care about the old versions, you probably should not be archiving them. To solve this problem, we could implement our own separated store only for the colour; this store would be responsible for keeping only the newest version of the string and notify the components when it gets changed.

This is very similar to what Flux does (another pattern, like Redux), so I am going to use it in my solution. Ok, I know I said I did not want to "use 26 more libraries", and I do recommend building your own methods for it; in this case, however, I am going to use Flux and its libraries because 1- this is just a quick explanation, 2- it's fun, 3- I feel like doing it. Sorry.

Observations

Again, this application will be in a single file, so you can just copy and paste the code.

My imports:

import React from 'react';
import ReactDom from 'react-dom';
import { Provider, connect } from 'react-redux';
import { createStore, combineReducers } from 'redux';

// Two new imports:
import { Dispatcher } from 'flux';
import { EventEmitter } from 'events';

In this case, the state would be different: we will no longer be holding the colour, only the isOn attribute:

// This is not actual code, but just a representation of what the state would look like
// You do not need this in the file
{
  isOn: false
}

#1 Creating the Flux dispatcher

For Flux, we need to instantiate our own dispatcher:

// Flux dispatcher
const dispatcher = new Dispatcher();

#2 Making the actions to toggle the light on/off and also the colour

The actions are going to be almost identical, with one exception: the action for changing the colour will be returned and dispatched to Flux:

// Action types
const TOGGLE_LIGHT = 'TOGGLE_LIGHT';
const CHANGE_COLOUR = 'CHANGE_COLOUR';

// Actions
const actions = {

  toggleLight() {
    return {
      type: TOGGLE_LIGHT,
    };
  },

  changeColour(colour) {

    // Action to be returned and dispatched
    const act = {
      type: CHANGE_COLOUR,
      payload: colour,
    };

    // Flux dispatch
    dispatcher.dispatch(act);

    return act;
  },

};

#3 Now we create the reducers

Since we are not storing the colour in the Redux state anymore, we will only have one reducer: isOn. We will still listen to the CHANGE_COLOUR action, but only to turn the lights on when the colour changes - the new colour itself will be ignored.

// Reducers
const reducers = {

  // Reducer for the 'isOn' attribute
  isOn(state = false, action) {
    const type = action.type;

    switch(type) {
      case TOGGLE_LIGHT:
        return !state;
        break;

      // When the user changes a colour, we turn
      // the lights on
      case CHANGE_COLOUR:
        return true;
        break;

      default:
        return state;
        break;
    }
  },

};

#4 Creating the root reducer and the Redux store

These steps are almost the same, but now I only have one reducer:

// The root reducer groups all the other reducers together
const rootReducer = combineReducers({
  isOn: reducers.isOn,
});


// Redux Store
// The resulting state that we get from the reducers
// would look like this, if the light was turned on:
// { isOn: true }
const store = createStore(rootReducer);

#5 Creating the Flux store to hold the colour

This part is new: this is where we will store the colour of the lightbulb, also providing a method for components to listen to the store in case it changes (using an event emitter) and providing a method to set a new value.

// Flux store for the colour: the store can emit events, so we
// inherit methods from the EventEmitter
const colourStore = (() => {
  let cache = '#FF0000';

  return Object.assign({}, EventEmitter.prototype, {

    // Getters and setters
    getColour() { return cache; },
    _setColour(v) { cache = v; },

  });
})();


// Registering the Flux colour store in the dispatcher: when we
// dispatch an action, we'll check if it is of the right type, and
// then we'll set the colour in the store
dispatcher.register((action) => {
  switch(action.type) {

    case CHANGE_COLOUR:
      colourStore._setColour(action.payload);

      // When the store changes, we emit an event to notify
      // the components that are subscribed
      colourStore.emit('change');
      break;

  }
});

#6 Creating the React component

This React component will not be as simple as the previous one: it will have a state, and the state will carry the colour of the lightbulb. When we create the component, we get the initial state from the store (colourStore.getColour()) and we also subscribe to the store (colourStore.on('change', () => { ... })): when the store changes, we get the new colour and set the new state (this.setState).

// React component for the lightbulb
class Lightbulb extends React.Component {

  constructor(props) {
    super(props);

    // Getting the initial state
    this.state = { colour:  colourStore.getColour() };

    // Listening for changes in the store: we update the
    // state whenever it changes
    colourStore.on('change', () => {
      this.setState({ colour: colourStore.getColour() });
    });
  }

  render() {

    // We are not getting the colour from the props anymore
    const {
      isOn,
      toggle,
      changeColour,
    } = this.props;

    return (
      <div>
        {isOn ? (
          <span style={{ color: this.state.colour }}>ON</span>
        ) : (
          <span>OFF</span>
        )}
        <br />
        <button onClick={toggle}>Turn {isOn ? 'off' : 'on'}!</button>
        <button onClick={() => changeColour('#0000FF')}>Blue light</button>
        <button onClick={() => changeColour('#00FF00')}>Green light</button>
        <button onClick={() => changeColour('#FF0000')}>Red light</button>
      </div>
    );
  }
}

#7 Binding the React component to Redux, making the Application component, and Rendering

Everything is the same now, except that we are not passing the colour as a prop anymore:

// Element to be rendered (Lightbulb connected to Redux)
const LightbulbElement = (() => {

  const mapStateToProps = (state) => ({
    isOn: state.isOn,
  });

  const mapDispatchToProps = (dispatch) => ({
    toggle() {
      dispatch(actions.toggleLight());
    },

    changeColour(colour) {
      dispatch(actions.changeColour(colour));
    },
  });

  return connect(
    mapStateToProps,
    mapDispatchToProps,
  )(Lightbulb);

})();


// Application (the element with redux bound to the store)
const Application = (
  <Provider store={store}>
    <LightbulbElement />
  </Provider>
);


// Rendering the app in the #app div
ReactDom.render(Application, document.getElementById("app"));

Done! Redux will keep a history of the isOn property of the lightbulb, but the colour will not be archived.

by Henrique Salvadori Coelho at January 05, 2017 08:25 PM

January 04, 2017


Henrique Coelho

Organizing data flow with React + Redux

One part of our application had its frontend made with React, taking advantage of its reactivity to state changes, which is very helpful when you are building modern and responsive applications. However, we underestimated the complexity of this system, and maintaining it with React only became very complicated and tiresome; this is when we decided to adopt one more paradigm: Redux.

This will not be a tutorial; instead, I only want to present a general idea of how all these tools work.

I will first make a quick introduction to how React works: the easiest way to understand it, for me, is by imagining it as a way to make custom HTML elements. For example, say you have the following pattern:

<div>
  <h1>Header</h1>
  <p>Body text goes here</p>
</div>

Wouldn't it be nice if instead of typing all these divs, h1s and ps, you were able to make a custom element with that format (maybe call it Section)? With React, it would be easy:

class Section extends React.Component {

  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.children}</p>
      </div>
    );
  }

}

Props are parameters passed to the component (like an HTML attribute, or children); they are retrieved from the this.props object.

Now to render this element with React:

<Section title="Header">
    Body text goes here
</Section>

React also has the concept of State, which refers to the mutable state of a component. For example: a lightbulb of 60W would have "60W" as a prop, but whether it is on or off will depend on its state.

States are very easy to work with: we set the initial state in the constructor, and every time we need to modify it, we use the method this.setState to pass the new state. The component will update itself automatically.

class Lightbulb extends React.Component {

  constructor(props) {
    super(props);
    this.state = { isOn: false };
  }

  toggle = () => {
    this.setState({
      isOn: !this.state.isOn,
    });
  }

  render() {
    let message;
    if (this.state.isOn) {
      message = 'On!';
    }
    else {
      message = 'Off!';
    }

    return (
      <div>
        {message}
        <button onClick={this.toggle}>Click me!</button>
      </div>
    );
  }

}

But things start to get complicated when our application grows: sometimes we need to access the state of a component from another component, and sometimes state needs to be shared. For this, we have to remove the state from the component and pass it to its parent, so the component only receives its values as props.

The tendency, therefore, is for all the state to end up in the root component, and for all the child components to only receive props: all the state lives in the root component and is passed down the tree as props; similarly, whenever an event happens at the bottom of the tree, it has to bubble up to the top.

This is when better paradigms start to appear: the most popular used to be Flux, and now, it is Redux.

Redux is more a paradigm than a library - you don't need to use the library, but it does provide some boilerplate code. It also respects this tendency of all the state living in a single root, which is called the store: the store is an object that contains the state of the whole application. And this is an important detail: you do not modify the state that lives in the store, you create a new "version" of this state - the old states get archived - which makes logging and debugging extremely easy. When you use the store provided by the Redux library, it will take care of recording the old states for you.

I would abstract the data flow of React + Redux into 5 simple steps:

  1. A component triggers an action (example: a button is clicked)
  2. The action is sent to the reducer (example: turn on the light)
  3. The reducer creates a new version of the state, based on the action (example: { lightsOn: true })
  4. The store gets updated with the new state
  5. The component gets re-rendered based on the new state

#1 A component triggers an action

To make the component trigger an action, we simply pass the function (action) as a prop - the component will then call it whenever the right event happens:

// In the lines below, we are binding the state from the store,
// as well as a function that dispatches the action to toggle
// the lights on/off. The "dispatch" function is provided
// by the Redux library - we only need to make the "toggleLight"
// action ourselves

const mapStateAsProps = (state) => ({
  isOn: state.isOn,
});

const mapDispatchAsProps = (dispatch) => ({
  toggle: () => {
    dispatch(actions.toggleLight());
  }
});

class Lightbulb extends React.Component {

  render() {
    let message;
    if (this.props.isOn) {
      message = 'On!';
    }
    else {
      message = 'Off!';
    }

    return (
      <div>
        {message}
        <button onClick={this.props.toggle}>Click me!</button>
      </div>
    );
  }

}


// The 'connect' function is provided by the react-redux
// library; it binds the state and dispatch functions above
// to the React component as props (note that it must run
// after the Lightbulb class has been declared)
const LightbulbElement = connect(
  mapStateAsProps,
  mapDispatchAsProps,
)(Lightbulb);

And to render this element:

<LightbulbElement />

#2 The action is sent to the reducer

An action is sent to all reducers automatically every time we use the dispatch method I described above. But what does that toggleLight action look like? Like this:

function toggleLight() {
  return {
    type: 'TOGGLE_LIGHT',
  };
}

Actions usually return objects with one or two properties: type and payload. The type property refers to what kind of action you are performing: every action should have a distinct type. The payload property contains any additional information that you need to pass to the reducer.

#3 The reducer creates a new version of the state, based on the action

Reducers are responsible for replacing the current state of the application with a new one. For every attribute in the state (for example, say our state object contains the attributes "isOn" and "colour"), we should have a distinct reducer - this will ensure that one reducer will not modify an attribute that does not belong to it.

In our case, since we only have one attribute (isOn), we would create only one reducer; it would check the action type to make sure that piece of the state should be changed, and if it should, it creates a new version of the state and returns it:

// This function receives "state", which is the previous state in our store,
// and "action", which is the action dispatched
function isOnReducer(state = false, action) {
  switch(action.type) {

    case 'TOGGLE_LIGHT':
      return !state;
      break;

    default:
      return state;
      break;

  }
}

In another scenario, say we are receiving a payload and we are going to modify a piece of state that is an object:

function myOtherReducer(state = { colour: 'black', opacity: 1.0 }, action) {
  switch(action.type) {

    case 'CHANGE_COLOUR':
      // Notice that I am using the spread operator (...) to create a new object
      // and recover the values of the previous state; then overriding the colour
      // with what I received from the payload
      return { ...state, colour: action.payload };
      break;

    case 'CHANGE_OPACITY':
      return { ...state, opacity: action.payload };
      break;

    default:
      return state;
      break;

  }
}

#4 The store gets updated with the new state

This part is done automatically by Redux; we only need to give it our reducer:

import { createStore } from 'redux';

import { isOn } from './reducers';

const store = createStore(isOn);

export default store;

#5 The component gets re-rendered based on the new state

This is also done automatically. Redux will detect if the parts of the store that a component uses have changed - if they have, the component will get re-rendered.

by Henrique Salvadori Coelho at January 04, 2017 08:58 PM

January 03, 2017


Matt Welke

The UAT and Caching

Today we got back up to speed on our ideas with the UAT (User Affinity Tool). We will be using it to generate information that Engineering.com’s ElasticSearch can use in its queries to drive better recommendations for their users.

There are a few problems we need to address as we begin to work on it. The first problem is caching. We need to be able to prevent the UAT from being overloaded by lots of users, but more importantly we need to make sure that if a user has to wait a while for a recommendation query to finish (and it might take a while, because of the massive amount of data), they only have to wait for it to complete once in a while. We plan to cache the results of our UAT’s user behavior queries for a while, probably about an hour.

It turns out caching is a lot more powerful than I previously realized: it offers a lot of flexibility in how aggressive your caching is, and the more aggressive it is, the less data needs to be transmitted over the network, if any at all. Google’s developer documentation, which I had glanced at before, helped immensely with understanding HTTP caching. Put simply, the degrees of caching aggressiveness are (a rough sketch in code follows the list):

  1. Most aggressive – From the server, set the “max-age” attribute of the “Cache-Control” header to a high number (of seconds). For example, use 31,536,000 for one year. The browser will use its locally cached response for an entire year before ever sending a request to your server again. No request sent to server on subsequent calls for a long time.
  2. From the server, set the max-age attribute to something reasonable, for example, one minute, for something that might not be expected to change very often. This can help with performance. No request sent to server on subsequent calls for a little while.
  3. From the server, do nothing at all to the request your HTTP library is about to send. This means that the “ETag” header will still be created and sent, which acts as hash code representing your response. If the max-age wasn’t set, it’s still cached by the client, it’s just immediately considered “stale” and would normally not be used on subsequent requests. The client will send you an ETag too as part of the subsequent requests, and if that ETag matches your (server side) ETag, it means the resource hasn’t changed, and instead of sending the expected response, your HTTP library will simply send a 304 Not Modified response. The client will then use its cached copy even though it’s considered “stale” because the resource wasn’t modified and there’s no need to download it again. Request sent to server on subsequent calls, but tiny response sent back, until the resource changes. This is the default caching policy of most HTTP libraries in server side web frameworks.
  4. Least aggressive – From the server, use the “no-store” attribute of the Cache-Control header. The browser will always complete a full request and response, never even consulting the local cache, even if the ETag header would have revealed that the resource didn’t change. Request sent to server on subsequent calls, forever. No caching.
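To make these levels concrete, here is a rough sketch of what they could look like from an Express route handler (Express and these particular routes are assumptions for illustration, not our actual code):

const express = require('express');
const app = express();

// 1. Most aggressive: the browser reuses its copy for a year
app.get('/static/logo.png', (req, res) => {
  res.set('Cache-Control', 'public, max-age=31536000');
  res.send(); // the actual file bytes would be served here
});

// 2. Cache briefly, for data that rarely changes
app.get('/affinities/:userId', (req, res) => {
  res.set('Cache-Control', 'public, max-age=60');
  res.json({ articles: 66.6, videos: 33.3 });
});

// 3. Do nothing: Express still sends an ETag by default, so an unchanged
//    response comes back as a tiny 304 Not Modified
app.get('/summary', (req, res) => {
  res.json({ totalHits: 1234 });
});

// 4. Least aggressive: never cache
app.get('/live-status', (req, res) => {
  res.set('Cache-Control', 'no-store');
  res.json({ time: Date.now() });
});

app.listen(3000);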

I had also been assuming that caching and the browser were the same thing. In reality, browsers implement a universal caching standard, so developers who set the response headers accordingly don’t need to worry about how the caching is done. This is great if you’re talking to browsers, but we may be serving up these responses from the UAT to non-browser clients, like AWS Lambdas, etc. This means that we would need to implement our own caching. We found that AWS’s API Gateway service, which controls access to the Lambdas, has a feature to perform caching, but it does cost money and has some limitations (disk space etc). We will need to research API Gateway or consider other ways to address this caching issue for our UAT project.

The second major problem we have with the UAT is to decide on a format for the queries we need to have it perform. We need to have something that encompasses all of Engineering.com’s needs. However, we want to employ the same tactics we did when we created the VAT (Visual Analysis Tool), in that the tool should be adaptable and the staff should be able to make their own queries etc. We think we can add a feature to the VAT to export the results of its queries not as CSV and pretty graphics, but as the JSON-format HTTP request body we would need for a UAT query. To pull this off, we’ll need to consider what parameters the UAT queries need, and how the staff can pull them dynamically as the thing runs, given that the VAT gets these parameter values from the staff sitting in front of the screen and entering them in (ex. userId, authorId, country, etc).


by Matt at January 03, 2017 09:44 PM

December 21, 2016


Laily Ajellu

Overlaps between SEO and Accessibility

Search Crawlers vs. Screen Readers

Search Engine Optimization is done by adding code in your webpage for search engine crawlers to understand the content and elements on the page. Screen readers do something very similar. Even though the two mostly read different parts of your code, some code, like headers in HTML, is read by both.

How a Web Crawler Works


Similarities

Both Search Crawlers and Screen readers:
  • Read:
    • URL name and structure
    • HTML code looking for SEO/Accessibility keywords and their values
    • Header tags
    • Video transcription (Having a text script of a video)
    • Image captions
    • Title tags
    • Colour, size, and contrast of text
    • The order of your content
    • Your method of navigation, table of contents, breadcrumbs
    • Link anchor text (linking to another part of the same page)
    • Alt attribute on an img tag
  • Care about “Findability”, how easy it is to find your content when someone is searching through a search engine or is already on your site.

Differences

Crawlers

  • Read meta tags

Screen readers

  • Read ARIA tags

Because of these differences, you can’t assume that if you’ve optimized one you’ve implemented the other as well. But they are very conceptually similar.

URL and File naming for SEO and Accessibility

Follow these guidelines when creating accessible and optimized URL naming and file naming:
  1. Keep them short and meaningful

    URLs are a part of how searchers choose which website to go to
    • When a screen reader is reading out each URL on a page, what is the most important information for the user to know? They may not wait for the whole URL to be read
  2. Use key search words

    Search terms in URLs are read by crawlers and screen readers



  3. Don’t create many levels within levels of content. Keep content closer to the root DOM element


References:

  1. What You Should Know About Accessibility + SEO, Part I: An Intro - Laura Lippay
  2. Web Accessibility Can Boost Your SEO


by Laily Ajellu (noreply@blogger.com) at December 21, 2016 07:35 PM

Inclusive Design Techniques - Advanced Accessibility

How to approach Inclusive Design

When designing an accessible webpage, start by addressing problems that users have with similar web apps. Ensure that users have access to all content, and that these common problems are solved first. After this, continue to implement accessibility by following WCAG or other general design guidelines.

This approach is similar to designing a secure website. Known vulnerabilities should be closed up first.



History of Accessible Design

Web developers used to follow: Graceful Degradation.

  • Building for the majority of users instead of all users
  • Implements accessibility as a hack, patching a complete website

Web developers now follow: Progressive Enhancement

  • Building websites based on organized, efficient and accessible content
  • Gets content out there for everyone ASAP
  • Considers accessibility a part of core functionality
  • Adds “nice to have” features afterward



Choosing Accessible Software for your Web App

Because most web apps are created by stitching other packages into the company's own code, it is important that the other software being used is also accessible.


Here are some questions to ask yourself when choosing software to build on:
  • Please demonstrate how to use this software without a mouse
  • How did you test the software with users with disabilities?
  • Does the software have a Voluntary Product Accessibility Template (VPAT)?
    • VPATs are a set of tables that describe what accessibility features the web app has
    • It can be broken down into 4 tables (according to an example by the U.S. Department of State)
      • Diskeeper Summary Table
        • Lists which tables below are applicable to the software
      • Software Applications and Operating Systems
      • Telecommunications Products
      • Video and Multi-media Products
      • Self-Contained, Closed Products
      • Desktop and Portable Computers
      • Functional Performance Criteria
      • Information, Documentation and Support
      • Web-based Internet Information and Applications


Example Diskeeper Summary Table

References

1. U.S. Department of State - VPAT
2. Advanced Web Accessibility

by Laily Ajellu (noreply@blogger.com) at December 21, 2016 05:56 PM

December 20, 2016


Henrique Coelho

Prototyping a calculated field on MongoDB for quick access

The next phase of our project will be a content recommendation system for the users who visit our website: we will consider their past preferences (article category, for example) in order to recommend new content. This system needs to be fast and not use the database unnecessarily, since it will be used for every visit of every user. Considering that all the data we gather from our users are spread among several collections in our database, we cannot afford to make an expensive, slow operation with joins; we need a way to make this operation fast and cheap.

Calculated values are a great way to turn expensive and slow operations into very simple queries, however, they have a drawback: how to keep them synchronized? Our solution for this problem was using a collection that contains all the hits made by a user, which we called a "session" (a session contains many hits); every time the user makes a new hit, we use the information from this hit to improve the history we have in the session - it also ensures that the calculated fields will always be up to date.

For example, assuming this is our current history for the user:

Session
{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6, 7, 8],
        articlesVisited: 5,
        videosVisited: 3,
    }
}

The history says that the user visited 5 articles and 3 videos; the IDs visited (of the articles and videos, assuming they are stored in the same collection) are 1, 2, 3, 4, 5, 6, 7, and 8.

If the user makes another hit in another article (say article #9), the history in the user's session would be changed to:

Session
{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6, 7, 8, 9],
        articlesVisited: 6,
        videosVisited: 3,
    }
}

Changes like these are very easy to make with MongoDB. To push the new ID into the array, we can simply use the $push (allows duplicate values) or the $addToSet (unique values only) operator:

db.sessions.update({
    _id: <session id>
}, {
    $addToSet: {
        "history.visitedIds": <article id>
        // In our case, the article id would be "9"
    }
});

Likewise, it is easy to increment values, such as the number of articles visited, using the $inc operator:

db.sessions.update({
    _id: <session id>
}, {
    $inc: {
        "history.<field to increment>": 1
        // In our case, the field to increment would be "articlesVisited"
    }
});

Joining them together:

db.sessions.update({
    _id: <session id>
}, {
    $addToSet: {
        "history.visitedIds": <article id>
    },

    $inc: {
        "history.<field to increment>": 1
    }
});

This takes care of maintaining the calculated fields up to date with a simple operation.

Now we get to another detail: the calculated field we are keeping is not in the exact format we want it to have. For example, instead of just the raw number of visits a user made, couldn't we have it as a percentage? This would help us group users into clusters, if we so desire; for example:

Session 1
{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6],
        articlesVisited: 4,
        videosVisited: 2,
    }
}

Session 2
{
    history: {
        visitedIds: [1, 2, 3],
        articlesVisited: 2,
        videosVisited: 1,
    }
}

Despite the user from Session 2 having fewer visits than the user from Session 1, their preferences are actually similar: both visited twice as many articles as videos. We could abstract these preferences like this:

Session 1
{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6],
        articlesVisited: 4,
        videosVisited: 2,
    },
    affinity: {
        articles: 66.666,
        videos: 33.333,
    }
}

Session 2
{
    history: {
        visitedIds: [1, 2, 3],
        articlesVisited: 2,
        videosVisited: 1,
    },
    affinity: {
        articles: 66.666,
        videos: 33.333,
    }
}

This could be done after we pull the data, on the server, or directly in the database. If we do it in the database, we can use the aggregation framework on MongoDB to make this calculation:

First, we get the total number of visits. For this, we can use the $project operator to sum the number of visits on articles and videos:

db.test.aggregate([
    { $project: {

        _id: 1, // Keep the ID

        history: 1, // Keep the history

        // Creating the "totalVisits" field by adding the visits together
        totalVisits: { $add: [
            "$history.articlesVisited",
            "$history.videosVisited"
        ]}
    }}
])

This would be the result:

Session 1
{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6],
        articlesVisited: 4,
        videosVisited: 2,
    },
    totalVisits: 6,
}

Now that we have the total of visits, we can do some arithmetic ($multiply and $divide for multiplication and division) to find the percentage of the categories with another $project:

db.test.aggregate([
    { $project: {
        _id: 1,
        history: 1,
        totalVisits: { $add: [
            "$history.articlesVisited",
            "$history.videosVisited"
        ]}
    }},

    { $project: {

        _id: 1,

        history: 1,

        // We don't project the totalVisits here, if we want to omit it

        affinity: {
            articles: { $multiply: [
                { $divide: [
                    "$history.articlesVisited", "$totalVisits"
                ]},
                100
            ]},

            videos: { $multiply: [
                { $divide: [
                    "$history.videosVisited", "$totalVisits"
                ]},
                100
            ]}
        }
    }}
])

And this will be the result:

{
    history: {
        visitedIds: [1, 2, 3, 4, 5, 6],
        articlesVisited: 4,
        videosVisited: 2,
    },
    affinity: {
        articles: 66.666,
        videos: 33.333,
    }
}

In this example, the categories were "hard coded": we will have more than "articles" and "videos", but this example was only to show that what we are envisioning can be done; we only need a more elaborate schema and a more intelligent algorithm (a rough sketch of a more dynamic update follows).
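For instance, the category could be read from the incoming hit and used to build the field name at update time, instead of hard-coding one counter per category. A rough sketch (the history.visits sub-document and the category values are assumptions, not our real schema):

// "category" comes from the incoming hit, e.g. "articles", "videos", "webinars"
var incSpec = {};
incSpec["history.visits." + category] = 1;

db.sessions.update({
    _id: <session id>
}, {
    $addToSet: {
        "history.visitedIds": <content id>
    },

    $inc: incSpec
});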

by Henrique Salvadori Coelho at December 20, 2016 04:04 PM

December 19, 2016


Matt Welke

Documentation and Improving the Timelines

This week we’re mostly wrapping up our work before the break. My team mate worked on documentation while I spent time learning more about Webpack and Babel, the JavaScript tooling we’re using to “compile” our programs. As the semester went on, I lost my mental picture of how they work (they’re considered quite hard to understand for those new to modern JS development), but luckily, after reviewing by building a small React program from scratch, I got a better grip on how they work.

We’ll be adding more to the Timeline component of our VAT (Visual Analysis Tool). I’m creating the ability for the user to get more detailed info about an event on the timeline by clicking on it, which will pop up a “modal”, similar to our User Guide. This work isn’t done yet; I have to refactor some code to get it working right, and once it’s working, we’ll be improving how the Timeline looks too (icons, etc).


by Matt at December 19, 2016 10:39 PM


Laily Ajellu

Introduction to ARIA for HTML

Why care about Accessibility?

Have you ever tried to use a website with your eyes closed, or with the screen turned off? You have no context of what is going on or what you’ve clicked. People with disabilities use screen readers - apps that read out the screen to you.

In the beginning it can be a nightmare of overlapping words and vague descriptions like “button”, leaving you with no idea what the button does. But a properly coded website labels its buttons and other components so you hear something like: “signout button” instead.

Isn’t that clearer?

Who are the Target Users?

  • Search Engines
  • Blind users
  • Dexterity-impaired users
  • Users with cognitive or learning impairments
  • Low Vision users
  • Motor Impaired users
  • Colour Blind users
  • Deaf Users
  • Cell phone/mobile Users
  • Temporary Disabled Users
  • Unusual Circumstance Users
  • Users on a website while multitasking
  • Children and novice-internet users
  • Seniors


How do I Start Coding?

ARIA (Accessible Rich Internet Applications) provides a syntax for making HTML tags (and other markup languages) readable by screen readers.

The most basic ARIA syntax is using roles. Roles tell the screen reader what category the tag belongs to - e.g. a button, menu, or checkbox.

Using Roles

In HTML, use elements that match your specific purpose. Don’t just use a div if you need a checkbox; a real input already has some accessibility features built into it.
eg. <input type="checkbox" role="checkbox">
not <div role="checkbox">

Tips:

  • The role of the element is more important than the HTML tag it’s on

  • Do not change a role dynamically once you set it; this will just confuse your users


What’s Next? Establish Relationships

These are the aria attributes that establish relationships between different tags:
  1. aria-activedescendant
  2. aria-controls
  3. aria-describedby
  4. aria-flowto
  5. aria-labelledby
  6. aria-owns
  7. aria-posinset
  8. aria-setsize

Aria-describedby & Aria-labelledby

  • Explains the purpose of the element it’s on
  • Most commonly used, and most useful
  • Create a paragraph tag with the label/description info and place it somewhere off the page


CSS recommended:



The great thing is that you don't have to add any CSS to hide the paragraphs pointed to by aria-labelledby and aria-describedby.
All you have to do is add the `hidden` property to your HTML tag!
Reference: Hidden attribute

Code Example
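A minimal sketch of the idea in plain JavaScript (the ids and text are made up for illustration):

// A hidden paragraph holds the description; the button points at it
// with aria-describedby, so screen readers announce it with the button.
const description = document.createElement('p');
description.id = 'signout-desc';
description.hidden = true;
description.textContent = 'Signs you out and returns you to the home page';

const signOutButton = document.createElement('button');
signOutButton.textContent = 'Sign out';
signOutButton.setAttribute('aria-describedby', 'signout-desc');

document.body.appendChild(description);
document.body.appendChild(signOutButton);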



Aria-activedescendant

  • Shows which child is active
  • Must be on a visible element
  • Must be someone’s descendant
  • Or must be owned by another element using aria-owns


eg. on a textbox inside a combobox

Code Example
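A minimal sketch in plain JavaScript (the ids are made up for illustration):

// The textbox keeps focus; aria-activedescendant tells the screen reader
// which option in the owned listbox is currently "active".
const input = document.createElement('input');
input.setAttribute('role', 'combobox');
input.setAttribute('aria-owns', 'colour-listbox');

const listbox = document.createElement('ul');
listbox.id = 'colour-listbox';
listbox.setAttribute('role', 'listbox');

const option = document.createElement('li');
option.id = 'option-red';
option.setAttribute('role', 'option');
option.textContent = 'Red';
listbox.appendChild(option);

input.setAttribute('aria-activedescendant', 'option-red');

document.body.appendChild(input);
document.body.appendChild(listbox);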



Aria-controls

  • If you click or change the value of one element, it will affect another


Eg. if you click an Add button, a number will be increased by 10

Code Example
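A minimal sketch in plain JavaScript, mirroring the Add-by-10 example above (ids are made up):

// The button declares that it controls the counter element,
// and clicking it changes the counter's value by 10.
const counter = document.createElement('span');
counter.id = 'counter';
counter.textContent = '0';

const addButton = document.createElement('button');
addButton.textContent = 'Add';
addButton.setAttribute('aria-controls', 'counter');
addButton.addEventListener('click', () => {
  counter.textContent = String(Number(counter.textContent) + 10);
});

document.body.appendChild(counter);
document.body.appendChild(addButton);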



Aria-flowto

  • Indicates which element to look at/read next
  • Doesn’t affect tab order
  • Only supported by Firefox and IE
  • The screen reader only follows the flowto when you press the = key, so it’s not very useful
  • Can flow to more than one element


Code Example
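A minimal sketch in plain JavaScript (the ids are made up; more than one id can be listed, separated by spaces):

// Suggest that the summary be read right after the intro,
// regardless of where the two sections sit in the DOM.
const intro = document.createElement('section');
intro.id = 'intro';
intro.setAttribute('aria-flowto', 'summary');

const summary = document.createElement('section');
summary.id = 'summary';

document.body.appendChild(summary);
document.body.appendChild(intro);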



Aria-owns

  • Indicates who the parent of a child is
  • Do not use if parent/child relationship is in DOM
  • A child can only have 1 parent


Code Example
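A minimal sketch in plain JavaScript (the ids are made up; skip aria-owns entirely when the parent/child relationship already exists in the DOM):

// The button "owns" a popup menu that is not its DOM child.
const menuButton = document.createElement('button');
menuButton.textContent = 'File';
menuButton.setAttribute('aria-haspopup', 'true');
menuButton.setAttribute('aria-owns', 'file-menu');

const menu = document.createElement('ul');
menu.id = 'file-menu';
menu.setAttribute('role', 'menu');

document.body.appendChild(menuButton);
document.body.appendChild(menu);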



Aria-posinset & Aria-setsize

aria-posinset
  • Indicates the position of an item in a set
  • Don’t use it if all the items of the set are already present (browser calculates)
aria-setsize
  • Number of items in the whole set


Code Example
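A minimal sketch in plain JavaScript (the values are made up): item 3 of a 50-item result set, where only a page of items is actually in the DOM.

const item = document.createElement('li');
item.setAttribute('role', 'option');
item.setAttribute('aria-posinset', '3');
item.setAttribute('aria-setsize', '50');
item.textContent = 'Result #3';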



Change Aria Properties Dynamically (except Roles!)

  • Eg. aria-checked on the chosen checkbox (see the sketch below)
  • Keeps the user up to date with page changes
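For example, a custom checkbox can keep aria-checked in sync when it is toggled - a minimal sketch (the selector is made up for illustration):

const checkbox = document.querySelector('[role="checkbox"]');
if (checkbox) {
  checkbox.addEventListener('click', () => {
    const isChecked = checkbox.getAttribute('aria-checked') === 'true';
    checkbox.setAttribute('aria-checked', String(!isChecked));
  });
}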


Make Keyboard Navigation Intuitive

  • Enable navigation using up and down arrow keys
  • Enable select with space and enter
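A minimal sketch of arrow-key navigation and Enter/Space selection for a listbox (ids are made up, and the options are assumed to have a tabindex so they can receive focus):

const listbox = document.getElementById('colour-listbox');
if (listbox) {
  listbox.addEventListener('keydown', (event) => {
    const options = Array.from(listbox.querySelectorAll('[role="option"]'));
    const current = options.indexOf(document.activeElement);

    if (event.key === 'ArrowDown' && current < options.length - 1) {
      event.preventDefault();
      options[current + 1].focus();
    } else if (event.key === 'ArrowUp' && current > 0) {
      event.preventDefault();
      options[current - 1].focus();
    } else if (event.key === 'Enter' || event.key === ' ') {
      document.activeElement.setAttribute('aria-selected', 'true');
    }
  });
}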


Review of Aria Process

  1. Choose HTML tags that are more specific to your needs
  2. Find the right roles
  3. Look for groups and build relationships
  4. Use states and properties in response to events
  5. Make Keyboard Navigation Intuitive


Reference: https://www.w3.org/TR/wai-aria-primer

by Laily Ajellu (noreply@blogger.com) at December 19, 2016 07:27 PM

December 14, 2016


Matt Welke

Moving onto the UAT

I completed a suite of unit tests for our CSV export feature of the VAT (Visual Analysis Tool). They involve going across all the collections and doing many possible types of joins. It’s impossible to create a unit test for every combination of collections joined and columns included, but I aimed to at least connect every type of collection to every other type of collection (Hit joined with Browser was good enough, no need to do the other four “media” type collections) and do a join that included just the id column from the joined table and a join that included the id and other columns from the table. In total, we have over 50 CSV export unit tests that we can run to make sure the data we get back makes sense if we change the schema.

We completed the feature to keep our copy of the User information in our MongoDB up to date with the copy in their SQL Server. We originally planned on creating an API to handle this. When we received a hit to log in our database, we would call the API which would access their database and retrieve the extra user information. This way, we would always be up to date. However, we realized that because we were already putting the SQL Server user ID on the web page (to be picked up by our JavaScript code that logs things), it would be trivial to just add the rest of the user information to the generated web page too. Then, the JavaScript we already created to mine the data can just look for these additional pieces of information and log them too. This way avoids the need to create another API for this feature. In the end, less is more, right?
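In practice, that can be as simple as the page exposing a few extra attributes for the tracking script to read. A rough sketch (the data attributes are made up for illustration, not the real markup):

// Assumes the CMS renders something like <body data-user-id="42" data-user-country="CA">
const body = document.body;
const hit = {
  userId: body.getAttribute('data-user-id'),
  country: body.getAttribute('data-user-country'),
  url: window.location.href,
};
// ...the hit object is then sent along with the rest of the logged data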

We’re beginning to think about the UAT (User Affinity Tool) at this point. While we have time before we begin working on it, it would be nice to have our ideas digest over the break before we return for the second consecutive coop term and begin work on it. We need to think about where the information comes from that is used in the formula to recommend things.

Should we use all hits for that user? This is safe, but prevents us from using the hits the person had before they registered as a user. What if they never register? They would be ignored.

Should we include all the hits in a shared session (for example with someone using a shared computer)? This would get us lots of information, but it may be less accurate. It would be less personalized.

We began thinking of the pros and cons of these approaches, and we’ll continue to brainstorm before we get to the point that we begin this work.


by Matt at December 14, 2016 10:48 PM

December 12, 2016


Matt Welke

Unit Testing the VAT

Unit testing is fun. Unit testing when you have to emulate a browser doing HTTP requests is even more fun. Unit testing when you have to emulate a browser *logging in* and then doing more HTTP requests is… etc.

I got some practice using the Mocha JavaScript unit testing framework today while I finished creating some unit tests for our CSV export feature. We ended up having to code in config variables to disable features like the login system and the new heartbeat feature (which my team mate created to improve the stability of the system) just to get my unit tests to run. But in the end, I was successful:

unit_tests

It’s just a start. Most of my time was spent getting the testing going and creating the helper functions I’ll use to speed up making more tests. We can now quickly create more tests later on to make sure our MongoDB joining works well, and to make sure changing our schema if we need to doesn’t break the joining functionality.
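A stripped-down sketch of what one of these tests might look like (the route, query string, and use of supertest are assumptions for illustration, not our actual suite):

const request = require('supertest');
const assert = require('assert');
const app = require('../app'); // hypothetical path to the Express app under test

describe('CSV export', () => {
  it('joins Hits with Browsers and returns CSV', (done) => {
    request(app)
      .get('/export/csv?collections=hits,browsers') // hypothetical endpoint
      .expect('Content-Type', /csv/)
      .expect(200)
      .end((err, res) => {
        if (err) { return done(err); }
        const header = res.text.split('\n')[0];
        assert(header.indexOf('browser') !== -1, 'joined columns should appear in the header');
        done();
      });
  });
});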

One thing I’m bad for is writing code blindly, never using TDD (test driven development) or BDD (behaviour driven development) techniques. I think they’re good in theory. They can help avoid feature creep (by letting you know when you’ve coded “enough”) and allow you to retest your code as you go, giving you confidence you didn’t break anything. So I’m glad I got to practice this. We’ll need to have a thorough unit testing system in place when we ship anyways.

As we shift into the UAT (User Affinity Tool) portion of the project, which will involve writing more code, I will try to write unit tests ahead of time before coding the features, or perhaps code unit tests while my team mate codes features etc.


by Matt at December 12, 2016 10:25 PM


Henrique Coelho

Simulating Inner Joins on MongoDB

Probably one of the most important features in SQL, for more complicated queries, is the ability to join data from several tables and group them; for example: having a table of users and a table of messages, and then joining them both to get the users, as well as their messages. There are several types of joins in SQL:

Diagram of the different types of joins

Now, for our project, we are using MongoDB, a NoSQL database - how do joins work, in this case? Say we have two collections on MongoDB that follow this schema:

Users: {
    _id: Number,
    name: String
}

Messages: {
    _id: Number,
    text: String,
    creator: Number // References _id in Users
}


And a sample of the data:

Users: [{
    _id: 100,
    name: "John",
}, {
    _id: 101,
    name: "Paul"
}]

Messages: [{
    _id: 200,
    text: "Hello, how are you?",
    creator: 101
}]

And now I want to get all the messages, as well as the creator's name. Is there an easy way to do this? There is: with Mongoose, we can build these relationships in the schema, and we can use the populate method to join the two pieces together:

Schema:
Users: {
    _id: { type: Number },
    name: { type: String }
}

Messages: {
    _id: { type: Number },
    text: { type: String },
    creator: { type: Number, ref: 'Users' } // References _id in Users
}


Joining:
db.Messages.findOne({})
           .populate('creator')
           .exec((err, docs) => {
               if (err) { throw err; }
               console.log(docs);
           });

This would give us an output similar to this:

[{
    _id: 200,
    text: "Hello, how are you?",
    creator: {
        _id: 101,
        name: "Paul"
    }
}]

Good enough, right? Ok. But the problem is that this is a full left join: if there was a message without a creator, it would still be selected. So, what if I want an inner join? Short answer: you can't. MongoDB does not support inner joins. This is fine for most scenarios: you can simply filter the data afterwards to get rid of the incomplete documents; but it starts to be a problem when you run into memory issues, which was the problem we faced during the development of a module - and it would be a really big problem. Luckily, we have algorithms on our side!

In our case, execution time is not a big issue, but we must do inner joins across many collections (often more than 5) and memory is a limiting factor, so we tried to make the best of this scenario. I designed a module that does the inner joins manually for us and saves as much memory as possible. This is how I did it:

1- The most specific queries with the most sparsely populated collections happen first: if you are looking for "all the users that use IE 6", it is a much better idea to "look for the ID of the IE6 browser in the database, and then fetch the users that have that ID in the entry" than "getting all the users, selecting all their browsers, and then getting only the ones that use IE6".

2- For every query done, we build up more and more conditions for the next query: if you want all the users that use IE6, as long as they live in Canada, you do the query to "find the ID of the IE6 browser, and then you find the addresses within canada, and then you query for the users - but only the ones that match the accepted addresses and browser", instead of simply getting all the users at the end and joining the information.

3- Leave extra information for the end: if you want all the users' messages in addition to the users from the previous case, first you should find all the users that match those conditions and then find their messages, instead of scanning all the messages and then joining them with the users that matched the conditions.

4- If a query returned too many results even with conditions, try again later: following the rule #2, it is likely that if you let other queries run, you will end up with more conditions to refine the search even more. For example: if your first search for browsers returned too many results, but the next search, the one for users only returned 1 result, your next query for the browsers will only need to find the browser for that particular user.

Following these 4 rules, I managed to come up with a module that makes inner joins on MongoDB for our project: you pass a JSON object with the conditions you want, and it will do the queries for you and join them automatically. For example:

stores.Users.execute({
    Users: {
        name: { contains: 'John' }
    },
    Browsers: {
        name: { contains: 'IE6' }
    },
    Address: {
        country: { matches: 'CA' }
    }
});

The snippet above would select all, and only, the users that have "John" in their names, live in Canada, and use IE6.
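Under the hood, the staged approach from rules #1 and #2 boils down to letting each query narrow the next one. A rough sketch of the idea (Browsers, Addresses, and Users are assumed Mongoose models, and "browser"/"address" are assumed reference fields - this is not the actual module):

// The final query behaves like an inner join: users with no matching
// browser or address are never returned.
Browsers.distinct('_id', { name: /IE6/ }, (err, browserIds) => {
  if (err) { throw err; }

  Addresses.distinct('_id', { country: 'CA' }, (err, addressIds) => {
    if (err) { throw err; }

    Users.find({
      name: /John/,
      browser: { $in: browserIds },
      address: { $in: addressIds },
    }, (err, users) => {
      if (err) { throw err; }
      console.log(users);
    });
  });
});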

I can't believe it actually works.

by Henrique Salvadori Coelho at December 12, 2016 10:13 PM

December 08, 2016


Matt Welke

VAT Working Well

Today we finished testing the VAT’s join functionality. It turns out the bug I described yesterday wasn’t really a problem. I just forgot to do a “git pull”. Great…

It works really well! It can join any entities in the database together. There are a few missing things like the actions which are embedded in the hits, but we doubt we’ll have any issues getting that to work soon.

We need to start giving thought to how we create the UAT (User Affinity Tool), which will be responsible for augmenting their recommender engine. This will basically mean finding a way to produce meaningful information for their Elasticsearch queries quickly enough to not impede the user experience. This may mean denormalizing data (storing affinity scores which can be read from one spot each hit versus querying a user’s entire history for every new hit). In any case, we’ll need access to the internal user information. Right now that means tapping into their DNN system, which is the CMS they use.

We think we will design this with an API style so that we avoid having our code directly access their database (SQL Server in this case), and then in the future if they change their CMS system, they just need to make sure the API can access it and return the user info to us in the same way as when we develop our system. It will likely be a very simple API so we think we’ll use AWS’s Lambda feature. This may be its time to shine. It’s great for simple API’s which are called business-to-business (not user-facing) because there’s no need for authentication or rate limiting, because you know who’s calling it. Something this simple can fit into one function. Lambda can even be economical because they charge by the 100ms of your function’s execution time. You don’t need to keep an EC2 instance running 24/7. If your function would be hit constantly so that you’re constantly incurring charges, then at that point it might make sense to just roll an instance instead. We’ll be crunching those numbers too, but we were told that AWS fees really aren’t a big concern for us so we’re not too worried. If anything, I’d love to use Lambda because it’ll help me get some practice in creating systems with the serverless architecture.

I’m excited for the UAT portion of the project because I get to see how one goes about connecting various web services together and using machine learning techniques like Elasticsearch’s capabilities. These are definitely skills I can take with me in the future in jobs and my own projects.


by Matt at December 08, 2016 10:58 PM

December 07, 2016


Matt Welke

Testing the Filter System

Today I researched a few more ideas for information to gather during each hit to the Engineering.com website, since my work on the user guide should be done for now before our Thursday demo. My team mate suggested I also begin testing the filter system he created (which lets people build their own queries from scratch, using joins etc from Mongo). It’s a good thing we decided to test this, because though the CSV export feature is working great, including limits to prevent unresponsiveness etc, it isn’t behaving properly with the joins. I completed a variety of tests which involved no joins/one join/many joins with and without filters; the combination of joins and filters causes it to throw errors, and it can’t handle more than one join either.

We’ll try to debug this issue tomorrow before the demo.


by Matt at December 07, 2016 11:41 PM


Henrique Coelho

Recovering data from Addthis

Long story short, Addthis is a tool that allows you to easily add social media follow and share buttons to your website; we wanted to detect when a user shares a page, follows our page, or comes from a post that was shared on social media. Luckily, Addthis offers a simple API to detect these actions: when you include Addthis in your page, it makes a global object in the window called "addthis", which can be accessed by any other script in the page.

Detecting shares

To detect shares, we listen for an event called "addthis.menu.share" on the addthis object, then we can recover the service (social media) the resource was shared on:

addthis.addEventListener('addthis.menu.share', service => {
    const serviceName = service.data.service;
    console.log(serviceName); // facebook, twitter, etc...
});

Detecting follows

This option was not described in the API, but to my surprise, it actually works!

To detect follows, we listen for an event called "addthis.menu.follow" on the addthis object, just like for shares:

addthis.addEventListener('addthis.menu.follow', service => {
    const serviceName = service.data.service;
    console.log(serviceName); // facebook, twitter, etc...
});

Detecting number of shares of a service

To detect the number of shares in one or more services, you can call the function addthis.sharecounters.getShareCounts, pass an array of strings with the names of the services, and a callback that will receive an array of objects (one for every service you passed) with the number of shares. Say you want the number of shares on 'facebook':

addthis.sharecounters.getShareCounts(['facebook'], shares => {
    const sharesNo = shares[0].count;
    console.log(sharesNo);
});

Detecting if the user is coming from a shared post

It is useful to know when a user is coming from a post that was shared on facebook, twitter, etc. To get this information, Addthis inserts a URL fragment similar to this one:

http://example.com/blog#AHb4gs1hwck.facebook

Where the last bit of the fragment is the name of the service.

Addthis offers another event listener to recover this information, and this is where things get weird: I really don't see why this should be an event, because it is not an event - it is "constant" information that should be retrievable at any time, just like the number of shares. Because of this, we decided that we would parse the information from the URL ourselves with a snippet similar to this one:

const url = window.location.href;
const regex = /.+#.+\.([a-zA-Z0-9_-]+)$/;
const result = regex.exec(url);
const serviceName = result[1];

This Regular Expression extracts the last bit of the fragment.



Overall, it is a very easy API, but the documentation could be a lot better and some pieces don't seem to fit together well.

by Henrique Salvadori Coelho at December 07, 2016 08:51 PM