Flatiron Blogger Magazine

Would You Date This Line (of Code)? (10/17/2017)
We have always judged people. We just pretended we didn’t. And then Tinder came along… My coding journey began with excitement. Coding journey: *swipe right*. I learned a lot in a short period of time and, yes, I originally judged what I was learning before I even dove in.
Make your life easier (10/18/2017)
What is workflow?

Workflow is defined as “the sequence of industrial, administrative, or other processes through which a piece of work passes from initiation to completion.” In our case, “other processes” refers to putting down working code to make an application. And as a series of processes, they are subject to optimization and improvement: anything that allows you to create code and applications more quickly and easily can be considered a tool for optimizing your workflow! Here are a couple of my favorite Vim plugins and their closest equivalents in Atom. Hopefully there’s one here you’ve never used.

Fuzzy file finder

Despite its cute-sounding name, fuzzy file finder is an awesome tool for searching through all of your files. Open it up and start typing whatever you can remember, and it’ll return the closest results. It’s not case sensitive and matches against all of your input, searching file names and folders to always find you the best result. (Shown above is what it looks like for me in Vim, but there’s a very similar Atom package.)

Linters

Do you ever find yourself wishing you could spot pesky syntax errors before running your programs? Well, now you can! Linters constantly check your code for errors and show them in a convenient format while you write. Finding and fixing mistakes becomes incredibly easy and happens in real time, meaning you don’t have to run your program to find out you missed an end. (Courtesy of https://atom.io/packages/linter.) Some editors, such as Eclipse, come with this functionality built in, but the rest of us have to go out and install it. My linter in Vim looks a little different, but they all follow the same rules. Remember to install linter packages for the specific languages you’re working in!

Beautify

As we stumble our way through our code trying to puzzle together something that works, it’s easy to lose track of indentation, spacing, and everything else that makes your code readable.
Get rid of the repetitive task of deleting extra whitespace or indenting all your lines: with the press of a button, beautify formats everything for you. (Courtesy of https://atom.io/packages/atom-beautify.)

The learning curve

Adding new packages and learning new keybinds is like learning to ride a bike over and over again, and in some cases requires reconditioning yourself to work with something new. Take it slowly and learn a few things at a time, otherwise it can be overwhelming. It’s often said that once you’re proficient, you should try to learn one new keybind a week. And remember: if you find something tedious and repetitive, chances are someone else felt the same way and built a package to fix exactly your needs!
Uploading Files Using CarrierWave In Rails (10/17/2017)
I’ve been teaching myself Ruby on Rails by using it to build a blog for my buddy. In the process, I’ve found myself needing to figure out how to manage file uploads, so that the articles he posts can have pictures associated with them. I read through several tutorials and figured I’d share here what I’ve learned, in the hope that someone else might also find it useful.

A few logistical notes before we get started: I’m running Rails 5.1.4 with CarrierWave 1.2.1 on macOS. Also, while I’ll do my best to explain as much as possible, some topics are beyond the scope of this tutorial; to follow along, you’ll need at least some cursory knowledge of Rails and the MVC paradigm. We’re going to be building a basic blogging application, like the one I’ve been working on for my friend. I’ll start with basic setup, which you can skip if you’re implementing CarrierWave in a pre-existing app. The code for this tutorial is also available on GitHub.

Part 1: Basic Setup

Alrighty, let’s boot up our Rails app:

rails new MyProject -T

The -T flag tells Rails to generate the new application without the default test suite. We’re omitting it here because it is unnecessary for the purposes of this tutorial; in practice, you would only do this if you were planning on using a different testing framework.

We’re then going to open our Gemfile and add some gems. Bootstrap is a styling library that’s going to help us get our site looking decent with minimal effort, and jQuery is a JavaScript library that helps with animations and other DOM manipulations. We’ll then install the gems:

bundle install

Next, we’re going to rename app/assets/stylesheets/application.css to app/assets/stylesheets/application.scss and add Bootstrap’s import lines. We’ll also add a few lines to the bottom of our app’s JavaScript manifest, located at app/assets/javascripts/application.js. Why are the lines we’re adding commented out? I don’t know.
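The snippets referenced above were not preserved here, so as a sketch (version constraints are my assumption; check the Bootstrap gem’s README for current ones), the Gemfile additions would look roughly like:

```ruby
# Gemfile (additions) -- Bootstrap 4 and the jQuery it depends on
gem 'bootstrap', '~> 4.0'
gem 'jquery-rails'
```

The stylesheet and manifest lines, per the Bootstrap gem’s install instructions, would be along these lines:

```scss
/* app/assets/stylesheets/application.scss */
@import "bootstrap";
```

```javascript
// app/assets/javascripts/application.js (bottom)
//= require jquery3
//= require popper
//= require bootstrap-sprockets
```

Incidentally, the `//=` lines only look commented out: they are Sprockets require directives, which deliberately live inside JavaScript comments so the file stays valid JavaScript.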
It’s what the documentation told me to do. ¯\_(ツ)_/¯

Great! We’re going to need a couple of other things before we get started: a controller and a model. Let’s generate those now (the g is short for “generate”):

rails g controller Posts
rails g model Post title:string body:text image:string
rails db:migrate

Our Post model will have a title, a body, and an associated image. Note that the image field is a string because it will store a reference to a given image’s filename, not the image file itself.

We can then set up our routes file to point to our newly created controller. resources denotes that we’ll be using RESTful actions for our Posts controller, while root to: tells the browser which page to display as the “home page” for our site: in this case, the #index method within the Posts controller. And we set up our controller to point to our soon-to-be-created views.

Now we can get to work designing our layout. We’re going to be creating a bunch of views and partials. Let’s start with a partial at app/views/layouts/_navigation.html.erb that will hold the design for our navbar. There’s a whole lot of Bootstrap going on here; if you’re unfamiliar, I’d recommend checking out Bootstrap’s excellent documentation. In order to display this partial, we need to render it within our main application layout, located at app/views/layouts/application.html.erb.

We can then create our index view, a view for creating new posts, a corresponding partial for displaying individual posts, and another partial containing the form we’ll use to create said posts. Lastly, we’ll make an edit view and a show view for displaying individual posts.

Phew! Notice how we’re using @post.image.url within the image_tag to point to the filename of the image for the post that we’re showing. Also, the image? method used in @post.image?
checks whether an attachment is present at all (since the image method itself will never return nil, even if no file is present).

Part 2: Getting Started With CarrierWave

Now, let’s begin working with CarrierWave. In the terminal we run:

rails g uploader Image

This generates a new file at app/uploaders/image_uploader.rb, which contains the configuration for CarrierWave. Note that this file contains a lot of comments and useful examples, so much of our interaction with it will consist simply of uncommenting certain lines.

If you open it up, near the top you should see the line storage :file. This indicates that uploaded files will be stored locally, by default in the public/uploads directory. (You can also configure CarrierWave to store files in the cloud, which is useful when working with platforms like Heroku that don’t allow you to store files on disk.)

We’ll now need to include, or mount, the uploader into the model we generated earlier, by adding a line to models/post.rb. And that’s it! Nice and easy. Now if you boot up your Rails server:

rails s

and navigate to localhost:3000 in your browser, you should be able to upload files and create new posts. Go ahead, give it a shot!

Part 3: Customization & Best Practices

While our simple setup is a suitable starting point, as it stands our application is not ready for public use. What if the user uploads a huge file, or uploads a file which isn’t an image? We need to validate and process the files they upload to make sure there’s no funny business. Since we’ll only be working with images in certain file formats, let’s add an extension whitelist to app/uploaders/image_uploader.rb (some of the lines should already be there, commented out). The %w(…) is Ruby shorthand for creating an array of the items contained in the parentheses.

We can additionally check a file’s size, but we’ll need to require an additional gem to do so.
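For reference, the uploader settings and the mounting line described above typically look like this in CarrierWave 1.x (a sketch reconstructed from the description, not the article’s verbatim code):

```ruby
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :file   # store uploads locally under public/uploads

  # Allow only common image formats (uncommented from the generated file)
  def extension_whitelist
    %w(jpg jpeg gif png)
  end
end

# app/models/post.rb
class Post < ApplicationRecord
  mount_uploader :image, ImageUploader   # attach the uploader to the image column
end
```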
Let’s add that gem now and install it:

bundle install

Now we can check to make sure the uploaded image is less than a certain size (in this case, 2 megabytes). That should be it! Now, if the user tries to upload an image that doesn’t meet our requirements, it won’t work, but it also won’t tell them why, which could be frustrating. To fix this, let’s give our app the ability to display error messages.

First, we’ll need to add the text we want our error messages to display at config/locales/en.yml. Then we’ll need to configure our views to display them: we create a new partial at app/views/posts/_errors.html.erb and render it within our app/views/posts/_form.html.erb partial (I added it at the bottom). Great! Now the user should get an error message beneath the form if they try to upload an image that violates our criteria.

This solves our first problem, but there’s another issue we should tackle. Say the user uploads an image that is very large, but nevertheless within the scope of what we allow. On certain pages, we might want to display a smaller version of that image. We could do this with CSS rules, but loading such a large file to display at such a small size is inefficient and will unnecessarily slow down the responsiveness of our webpage. A better solution is to tell CarrierWave to generate a smaller, thumbnail version of any file the user uploads.

To do this, we’re going to need a gem called MiniMagick, which in turn relies on another tool called ImageMagick. So before we get started, we need to make sure ImageMagick is installed:

brew install imagemagick

We’ll then add MiniMagick to our Gemfile and include it in our uploader; there should be a line near the top that you can just uncomment. Now we need to tell the uploader what to do.
Fortunately, there’s another line a little farther down that we can again just uncomment and tweak to our liking. We can then tell our views to display the thumbnail by using .image.thumb rather than just .image. For instance, we can tweak our _form partial so that it uses the thumbnail at the top.

And that’s it! Now any file the user uploads will automatically have a thumbnail version generated and stored alongside the original image.

Part 4: Conclusion

That’s all I’m going to get into for now. There was a lot we didn’t cover, including remote uploads, multiple uploads, cloud storage, and further ways to process the uploads. If you want to learn more, I recommend looking through the documentation, as well as this particular tutorial, which is excellent and from which much of my own knowledge on the subject was derived. CarrierWave also has a wiki with the answers to many frequently asked questions.

Sources:
https://code.tutsplus.com/articles/uploading-with-rails-and-carrierwave--cms-28409
https://scotch.io/tutorials/file-upload-in-rails-with-paperclip
https://code.tutsplus.com/tutorials/rails-image-upload-using-carrierwave-in-a-rails-app--cms-25183
https://github.com/carrierwaveuploader/carrierwave
https://getbootstrap.com/docs/4.0/getting-started/introduction/
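As an appendix to Part 3: the MiniMagick include and the thumbnail version block described there typically look like this in CarrierWave (a sketch; the 50x50 dimensions are an arbitrary choice):

```ruby
# app/uploaders/image_uploader.rb (inside the uploader class)
include CarrierWave::MiniMagick   # uncommented near the top of the generated file

# Generate a thumbnail version alongside every original upload
version :thumb do
  process resize_to_fit: [50, 50]
end
```

Views can then reference the thumbnail with @post.image.thumb.url.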
The Mystery of #inject in Ruby (10/12/2017)
Earlier this week we were given a problem to solve involving modeling the relationships in the setting of a bar. The problem was this: a bar wants to keep track of how much money each bartender is making, how much money each customer is spending, what drinks are being sold, and the cost of each drink based on the sum of the costs of its ingredients.

So first we decided to draw out all the relationships. We knew we had to have tables for bartenders, customers, drinks, and ingredients. This led us to decide that a bartender has many drinks, a customer has many drinks, and a drink has many customers and bartenders. A bartender also has many customers through drinks, and a customer has many bartenders through drinks. And finally, drinks have many ingredients. While this may seem complicated, we were able to make a diagram after many attempts.

(Diagram: relationships between bartenders, customers, drinks, and ingredients.) The single-sided arrow represents a has-many relationship between drinks and ingredients. The double-sided arrows represent many-to-many relationships. The dotted line between the customers and the bartenders represents the has-many-through relationship, since they are indirectly connected through the drinks that are bought and sold.

Next, we decided to work on how we would translate this diagram into SQLite3 tables. At first, in addition to the basic characteristics of each subject, we connected bartenders and customers through foreign keys on the drinks table. The SQL looked like this:

CREATE TABLE drinks (
  id INTEGER PRIMARY KEY,
  name TEXT,
  price INTEGER,
  bartender_id INTEGER,
  customer_id INTEGER
);

However, this creates a major problem in trying to model real life. With this setup, a drink could only be used once in the drinks table, because it would belong to one specific bartender and one specific customer through its foreign keys.
While it would work to have a drink_id on each ingredient to keep track of the drink it belongs to, that approach does not work for a many-to-many relationship. One way around this is to create multiple rows in the drinks table that have the same name but different customer and bartender ids. But this is not very useful: having multiple instances of a drink would lead to errors when searching for a drink by name. Suppose we have two drinks in our table, both named “beer”. If we are trying to find the beer that Joe bought, how would SQL tell the difference between that beer and the beer Mike bought? In addition to the name of the drink, you would have to include the id of the customer in the search, which takes extra programming. And as programmers, we are lazy! (credit: Flatiron School)

Instead, my partner, Yakov Kiffel, suggested we make a fifth table that would hold the ids of a bartender, a drink, and a customer: a transaction table. Little did he know he had made a join table. Neither of us had learned about join tables yet, but Yakov had come to this conclusion all on his own.

Join tables and the bar

Join tables are used in SQL when modeling many-to-many relationships. A join table sits between two or more classes, holding the foreign keys of those classes. One example I mentioned earlier is bartenders and drinks: a drink can be made by many bartenders, and a bartender can make many different drinks. We then updated our diagram to include a join table of transactions for bartenders, customers, and drinks. With this new table we would be able to model a transaction more realistically.
The code we came up with for our tables was this:

CREATE TABLE bartenders (
  id INTEGER PRIMARY KEY,
  name TEXT
);

CREATE TABLE customers (
  id INTEGER PRIMARY KEY,
  name TEXT
);

CREATE TABLE drinks (
  id INTEGER PRIMARY KEY,
  name TEXT,
  price INTEGER
);

CREATE TABLE ingredients (
  id INTEGER PRIMARY KEY,
  name TEXT,
  cost INTEGER,
  drink_id INTEGER
);

CREATE TABLE customer_bartender_drinks (
  id INTEGER PRIMARY KEY,
  bartender_id INTEGER,
  customer_id INTEGER,
  drink_id INTEGER
);

But what happens if the same customer orders the same drink from the same bartender? This would create two identical rows in our join table. While we didn’t get around to it, it would be smart to add another column to our join table holding the timestamp of the transaction, which would allow the same order between a bartender and customer to occur more than once.

So to test this code out, I made a database called bar.db, created these tables, and used this code to create a few rows in each table. (I omitted the ingredients table just to keep the example simple.)

-- Bartenders
INSERT INTO bartenders (name) VALUES ("Bob");
INSERT INTO bartenders (name) VALUES ("Rick");

-- Customers
INSERT INTO customers (name) VALUES ("Joe");
INSERT INTO customers (name) VALUES ("Mike");

-- Drinks
INSERT INTO drinks (name, price) VALUES ("jack and coke", 9);
INSERT INTO drinks (name, price) VALUES ("beer", 7);
INSERT INTO drinks (name, price) VALUES ("LI iced tea", 12);

-- Customer_Bartender_Drinks
INSERT INTO customer_bartender_drinks (bartender_id, customer_id, drink_id) VALUES (1,1,2);
INSERT INTO customer_bartender_drinks (bartender_id, customer_id, drink_id) VALUES (1,1,1);
INSERT INTO customer_bartender_drinks (bartender_id, customer_id, drink_id) VALUES (2,2,1);
INSERT INTO customer_bartender_drinks (bartender_id, customer_id, drink_id) VALUES (2,1,3);
INSERT INTO customer_bartender_drinks (bartender_id, customer_id, drink_id) VALUES (1,1,3);

So after inserting these test values I decided to try some of the problems
given to us. Here is one example.

Finding how much each bartender made

To find out how much a bartender has made, we have to look into our join table and see which drinks are associated with their id. After that, we look into the drinks table to see how much each of the drinks they sold costs. The query looks like this:

SELECT bartenders.name, SUM(drinks.price)
FROM bartenders
JOIN customer_bartender_drinks
  ON bartenders.id = customer_bartender_drinks.bartender_id
JOIN drinks
  ON drinks.id = customer_bartender_drinks.drink_id
GROUP BY bartenders.name;

We SELECT the name of the bartender and the total SUM of the drinks they have made. We then JOIN the bartenders table and the customer_bartender_drinks join table ON the bartender’s id equalling the bartender_id foreign key in the join table. Then we JOIN the drinks table and the customer_bartender_drinks join table ON the drink’s id equalling the drink_id foreign key. And finally, we GROUP our transactions BY the bartender’s name. The result was this:

name        SUM(drinks.price)
----------  -----------------
Bob         28
Rick        21

Join tables

In conclusion, join tables are useful for modeling many-to-many relationships between two or more classes or subjects. They allow an object (like a drink) to belong to more than one other object (like bartenders) by creating a central location for corresponding foreign keys (drink_id, bartender_id). This makes databases more realistic and allows for easier querying.
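The same aggregation can be sketched in plain Ruby, which is also where this post’s title comes in: #inject folds each bartender’s drink prices into a running total, just like SUM with GROUP BY. The data below mirrors the rows inserted above, with names in place of ids purely for readability.

```ruby
# Each pair is a bartender and the price of a drink they sold,
# i.e. the result of joining customer_bartender_drinks with drinks.
transactions = [
  ["Bob", 7], ["Bob", 9], ["Rick", 9], ["Rick", 12], ["Bob", 12]
]

# Group the sales by bartender (GROUP BY bartenders.name), then use
# #inject to sum each group's prices (SUM(drinks.price)).
totals = transactions
  .group_by { |name, _price| name }
  .map { |name, rows| [name, rows.inject(0) { |sum, (_n, price)| sum + price }] }
  .to_h

puts totals  # {"Bob"=>28, "Rick"=>21}
```

The result matches the SQL output: Bob made 28 and Rick made 21.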
To SQL or to NoSQL? That is the Question (08/07/2017)
When I was first learning about SQL and relational databases, it seemed like one of the easiest programming concepts to grasp. Having a database full of tables, each corresponding to a class, and each with columns representing attributes, just made sense. How else would you store your data? It seems like the most logical way, since we are all used to data being in tables. Well, as computer science has progressed, limitations of this SQL-based system have come up, and with these problems have come solutions in the form of NoSQL databases. I first heard about NoSQL databases when reading about different web development frameworks, specifically MEAN. The database in MEAN is MongoDB, and when looking up what it was, I realized it was a NoSQL database. This inspired me to figure out exactly what that meant and how it would work.

What is a NoSQL database?

A NoSQL database is one that forgoes tables with a set schema and set attributes for a more flexible, less structured database. One of the main differences between the two types of databases is the way they store data. Relational databases rely on huge tables full of instances of objects, which use references, called foreign keys, to link to instances of other classes. In NoSQL databases, most of the time all the data for a single object is stored in one file: in addition to holding a reference to another object, the file would contain that other object itself. This can be both good and bad, but I’ll get to that later. NoSQL databases also have no set schema, meaning there are no specific parameters that each instance of a class is required to have; attributes can be added or removed while the database is in use. To interact with the database, queries similar to SQL queries are made: each NoSQL database has an API for reading and writing data.
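To make the one-file-per-object idea concrete, here is a small Ruby sketch (the person record is invented for illustration): a key-value style store maps an opaque key to a blob, and the application, not the database, knows how to parse what is inside.

```ruby
require 'json'

# A toy key-value store: the "database" only sees opaque strings.
store = {}

# The application serializes a whole object, nested data and all,
# into a single value, the way a NoSQL store keeps one file per object.
person = {
  name: "Joe Bookreader",
  addresses: [
    { street: "123 Fake Street", city: "Faketon" },
    { street: "1 Some Other Street", city: "Boston" }
  ]
}
store["person:joe"] = JSON.generate(person)

# Reading it back: the store returns the blob; the application parses it.
loaded = JSON.parse(store["person:joe"])
puts loaded["addresses"].length  # 2
```

Note that nothing constrains the shape of the value: a second person could be stored with more, fewer, or entirely different fields, which is exactly the schemaless flexibility described above.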
There are many different kinds of NoSQL databases, but I will talk about key-value databases and document databases.

A key-value database is similar to the hash datatype used in Ruby. Each file in the database corresponds to an object, and within that file there is a hash that holds all of its attributes. These can contain nested hashes or even nested arrays, depending on the needs of the programmer.

Document-based databases are closely related to key-value databases, but the main difference is the organization and the database’s ability to read the data. Normally the data in a key-value database is opaque, meaning that the database does not know anything about what is in the file. In document-based systems, however, the database relies on metadata within the file, usually tags labeling the information, to find the correct data. The data in a document-based system is also stored very differently: instead of a hash of keys and values, it can be stored in formats such as JSON, XML, YAML, or BSON. This makes it more flexible and makes querying much easier, since the program can make more specific queries based on the content of the documents.

So why were these databases created?

What problems led to the boom in NoSQL databases? Well, one drawback is that relational databases must always have a set schema. Why would this be a drawback? I think we would all agree that the structure of relational databases is what makes it so easy to learn and understand how to query them. But it creates a problem for programs and websites that are constantly updated and need to store new data with different datatypes. Every time new information needs to be added to the schema, a migration has to occur. This means migrating the whole database to the new structure, which can require the server to be down, possibly for a long time depending on its size, while it is updated.
This means that, as a user of the program, you may not be able to access it during the migration.

Another drawback of SQL databases is how they scale. As the database gets bigger, it scales vertically: all the data lives on one server, which is responsible for everything. This limits how big the database can get, and means increasingly powerful servers are needed to keep it running. Although it is possible to shard the database (spread the work from one server to many), this requires extra programming work to build the network of servers.

Caching is another problem that relational databases can have. Usually a SQL database will not cache the results of queries, only the query path. A distributed caching system to hold the most frequently used information can be added as a layer on top of a relational database, but this adds more work for programmers and makes the system more complex.

How do they solve relational database problems?

So how does a NoSQL database fix these problems? A major problem they solve well is the need for an ever-changing schema. A NoSQL database can store data in many different ways, and new attributes of an object can be added or removed. This allows for a dynamic schema that can be changed on the fly, without having to take a server offline to migrate to a new schema. Here is a sample document from the MongoDB docs:

{
  _id: "joe",
  name: "Joe Bookreader",
  addresses: [
    { street: "123 Fake Street", city: "Faketon", state: "MA", zip: "12345" },
    { street: "1 Some Other Street", city: "Boston", state: "MA", zip: "12345" }
  ]
}

It also allows objects of the same class to have differing amounts of information. For example, PersonA might have an address for home and work, while PersonB may only have one for home. PersonB doesn’t have a nil value for his/her work address like a relational database would have; they just don’t have that information. There are no empty fields for PersonB, which greatly reduces memory usage.
Even though the relational table would only hold a nil in PersonB’s work-address column, that nil still takes up space in the database.

NoSQL databases naturally scale horizontally, meaning they can be spread across multiple servers. Instead of needing extremely powerful servers, a group of commodity servers can split the querying and workload automatically. Not only is this cheaper, but it makes it possible to use services like AWS (Amazon Web Services) to host a database for an application.

Caching is also handled much better in NoSQL databases. Most of them support integrated caching, which keeps the most frequently accessed data in memory at all times for quicker queries. With this information already in memory, a query does not even need to go all the way to the database.

This seems too good to be true!

And it actually is. If NoSQL databases fix all the limitations of SQL databases, why isn’t everyone using them? Well, for all the improvements that have been made, there are still several problems with these dynamic databases. One major problem is the lack of ACID transactions. While some NoSQL databases claim to adhere to ACID standards, they usually do not follow them fully.

What is ACID, and why is it important? ACID is a set of guidelines that database transactions should abide by in order to prevent bugs or incorrect data in the database. It stands for Atomicity, Consistency, Isolation, and Durability. Atomicity refers to a transaction either having all of its parts occur, or none at all: if one part of the transaction fails, all other parts are rolled back to their original state. Consistency means that the database should only be changed from one valid state to another; the transaction must be valid given all rules and constraints set forth by the database and the programmer.
Isolation means that the result of transactions executed concurrently should be the same as if they were executed sequentially. Durability refers to the idea that the database should record the transaction and commit it to storage even if the server loses power or shuts down unexpectedly after the transaction; this is done using non-volatile storage that does not require power to persist the data. Without following these four guidelines, the database is susceptible to errors and crashes.

Most NoSQL databases also do not support joins in their querying. There are some workarounds, but they can lead to errors. One workaround is that instead of doing one query to get all the related information, it is possible to do many smaller queries. This works because NoSQL queries are usually much faster, meaning that even if multiple queries are needed to get certain results, they can still be quicker than a join query in a relational database. What I mentioned earlier, storing the data of other objects alongside references to them, can also make up for the lack of joins: one query to a single object gets all relevant data, such as a blog post and all its comments. But this leads to another problem, called eventual consistency.

Eventual consistency is an idea that came about because of the lack of ACID standards. Say a NoSQL database has all the information for a blog post in one file. The data in that file, such as the comments and the usernames of the people who wrote them, might exist in multiple places in the database. So if one instance of the data is changed, like the username of someone who commented, it is not always changed everywhere else immediately. This is what eventual consistency refers to: the idea that the database will eventually normalize and propagate the changes everywhere.
But within the window where it is not consistent, stale data can be returned and used by the program. While this window is usually only a few milliseconds long, it can still happen.

Another reason these databases aren’t being adopted by companies is that many are already heavily invested in relational databases. So much money and time has been put into existing systems that switching is not worth the trouble; it would be quite difficult for an established company with a huge database to move its data into a NoSQL database.

So which should I use?

Overall, the choice of database depends on your needs. NoSQL databases are perfect for programs with ever-changing objects that constantly gain new attributes or information. Companies such as Facebook, Netflix, Google, and Amazon all use NoSQL databases; in fact, many of these companies developed databases for their own use and have since made them available to other companies. This is because these companies need to alter their databases constantly to keep their services running smoothly. Anytime Facebook adds a new feature, they can simply add the necessary data to their database without taking it offline or migrating. However, if you know that your program will not need to scale and will stay relatively small, a relational database may be better for you. Relational databases are easier to work with, because SQL is a much better querying language than the APIs used by NoSQL databases, and they follow an agreed-upon set of guidelines, which makes them consistent from company to company. So it is really up to you as a programmer to decide which database makes sense for your application.

Sources:
NoSQL Databases Explained
SQL SERVER - Performance: Do-it-Yourself Caching with Memcached vs. Automated Caching with SafePeak
NoSQL - Wikipedia
Node.js: What is it and how does it work? (08/29/2017)
What is Node.js?

If you have been coding in JavaScript, you have been using Node.js this whole time. Well, technically that isn’t true: when we’re coding in JavaScript we are using node and the npm library, but it’s not really Node.js. When we talk about Node.js, we mean using JavaScript to create a server and interact with our database on the backend. So far in my projects I have only used JavaScript on the frontend, to render new information on the page without refreshing it. With Node.js we can use JavaScript on the backend to handle requests to different URLs. First I’ll talk a little about how it works and how it differs from conventional servers, then give a brief overview of a test server, and finally discuss the pros and cons of a Node.js backend.

How does it work?

First of all, just like other languages, node comes with packages and modules. These are libraries of functions that we can install from npm (the node package manager) and import into our code. If you have node installed on your computer, then you already have some basic modules installed. These are how we create a simple server, but I’ll get to that later.

If you are familiar with JavaScript, then you know it is asynchronous and single-threaded. The single thread is the event loop, which is responsible for running all functions and requests. The asynchronous behavior is extremely important when using node, because it guarantees that the event loop is never blocked by a synchronous function. (source: https://webapplog.com/event-loop/)

Even though there is only one event loop, when a request is made the loop passes the request to an asynchronous function which does the work. When this function is done and a response is ready, it is passed back to the event loop via the callback and sent to the user.
If the functions were synchronous, the event loop would get locked up with one client’s request and response, and all other clients would have to wait until that client was done. Because of the asynchronous nature of JavaScript, applications using Node can handle many requests happening at the same time. This means that when programming in Node.js it is important to always keep in mind that the functions being written are asynchronous. It is also very important to catch errors on the server before a response is passed back to the client. This prevents errors from reaching the event loop, which could crash the program and make all clients suffer.

How does this affect users?

What does this mean for the user? Is Node.js faster than other backends? The answer is that it isn’t necessarily faster. The main benefit of a Node.js server is that it can handle much more traffic than a conventional server. Other servers are multithreaded, which means that each client, when connected to the website, gets their own thread, and each thread takes care of the requests made by its client. When you think about this it makes sense: each thread takes care of one user and there is no crossover between clients, so technically they shouldn’t affect each other. But in practice this is not true. In reality there are a limited number of threads available on a server, so when there are more requests than threads, a client must wait for a thread to open up. This can slow things down for other users, and can crash the server if it gets too many requests at once.

With Node.js, while there is only one thread (the event loop), when a request is made the event loop can pass it to another function to execute; when the work completes and the response is sent back to the user, the interaction ends. There doesn’t need to be an open connection (thread) at all times because everything is event driven.
This means that a Node.js server can accommodate more clients, with the same amount of memory, than a multithreaded server.

What database should I be using?

In my personal opinion, it makes sense to use a document-based NoSQL database such as MongoDB. The reason is that if you are using Node.js, then you are probably using a full stack JavaScript framework. With a document-based database it is extremely easy to read and write data, because all the information can be stored and passed as JSON. From front to back there doesn’t have to be any translation, since the frontend and backend both talk JSON when using full stack JavaScript. However, it is possible to use traditional SQL databases such as MySQL. The data being read or written just needs to be transformed so that it can be stored and rendered to the client, with methods like .json() and JSON.stringify().

Basic Server

So how do we get started with Node.js? The simplest example uses the http module that ships with Node. With this module it is possible to create the most basic server. Here is an example that starts a server on localhost:8080 and displays the message “My first Node.js server!”:

    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/html'});
      res.end('My first Node.js server!');
    }).listen(8080);

This is very basic, and for more useful routing, Node.js frameworks are used. Express.js is the most popular framework today because it makes it easy to create RESTful routes and provide access to the database through client requests.

What are the downsides?

Because Node.js is single threaded, an app that requires a lot of computation or heavy algorithms can slow down the event loop. This means that all clients could experience delays, because they are all waiting for the same computation to finish and free up the event loop. Another problem that can occur is callback hell.
This happens when there are so many callbacks nested within each other that the code becomes unreadable or hard to maintain. It can be avoided, but it takes practice to learn to think in this asynchronous way. Another problem is with the modules available from npm. Since Node.js is relatively new, packages and modules are constantly being created or updated, so it’s hard to have consistency between different websites and apps. For example, it would be much easier for a Ruby on Rails programmer to transition between jobs and projects, because Rails stresses convention over configuration; there are no such conventions for apps or websites written with Node.js. Accessing the database can also become a problem, because the database can only handle so many requests at once. This can be fixed by using a queue system for database queries.

When should I use it/learn it?

It is best used for websites that have open connections, such as chat or streaming. It is much easier to integrate websockets and realtime streaming into Node.js, because they were kept in mind when it was designed. It is also good when you know that your website or app will have a lot of client traffic, because it can handle many more connections with less memory used on the server. Overall I think it is good to at least have a basic understanding of how Node.js is used in a fullstack program. While it is the “hot” thing now to create MEAN stack applications or websites, there are many companies that use other frameworks and other backend languages such as PHP and Ruby. If you’re planning to work in start-ups, then I would highly recommend getting a firm grasp on Node.js. Personally I’m going to try to build some simple projects just to get used to the basic concepts of this backend. And I suggest you do too!

Sources:
Node.js HTTP Module
Why The Hell Would I Use Node.js? A Case-by-Case Tutorial
The Good and the Bad of Node.js Web App Development
Node.js - Wikipedia
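To make the “callback hell” point concrete, here is a small sketch. getUser and getOrders are made-up stand-ins that call back immediately; the contrast between the nested version and the flattened Promise chain is the point:

```javascript
// Stand-in "async" functions (not a real API) that call back immediately.
function getUser(id, cb) { cb(null, { id, name: "Ada" }); }
function getOrders(user, cb) { cb(null, [`order for ${user.name}`]); }

// Callback style: each dependent step nests one level deeper.
let result;
getUser(1, (err, user) => {
  if (err) throw err;
  getOrders(user, (err2, orders) => {
    if (err2) throw err2;
    result = orders; // a third or fourth level quickly becomes unreadable
  });
});

// Promise style: the same steps as a flat, readable chain.
const getUserP = (id) => Promise.resolve({ id, name: "Ada" });
const getOrdersP = (user) => Promise.resolve([`order for ${user.name}`]);

getUserP(1)
  .then(getOrdersP)
  .then((orders) => console.log(orders));
```

The Promise version does exactly the same work, but each additional step adds one `.then` rather than one level of indentation.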
Higher Order Components by on 09/13/2017
react-usa-map: A package for customizing the USA map without D3 by on 10/09/2017
TL;DR: I created a React package called react-usa-map. It displays the USA map with all the states, including DC / Hawaii / Alaska. It’s MIT-licensed. Install instructions are in The package section of this post.
Pry Me a River by on 06/15/2017
What I Learned from Rails Project Week by on 07/31/2017
Things move pretty quickly here at Flatiron School. After just a few weeks here, we were tasked with building our very own Rails apps! My team and I got super into our Rails project (an app we called Equidestined, which helps people find a midpoint between their locations and calls on the Yelp API to provide a list of venues where they can potentially meet up), and we’re hoping to build it out into JavaScript and Node later on.

In the meantime, here are a few things I learned from the experience:

Finding the midpoint between two or more geographic locations is no simple task! It’s been a subject of intense debate in many circles. Do you represent your map in two dimensions and accept a bit of distortion, or account for the curvature of the earth? If you opt for the latter, be forewarned that you’ll either need to rely on others’ research or reacquaint yourself with math concepts you may not have used since high school or college, and that there isn’t one accepted “best” way to do this.

APIs are a great resource, but they can also be an enormous headache. OAuth2 was tricky to navigate for a beginner like me, but my hours of Googling (interspersed with a bit of “staring and despairing”) paid off in the end. Also, even a well-designed API has its quirks, and it can be surprisingly hard to obtain what seems like simple information. The Yelp API stores tons of detail on the “subtypes” of its businesses, but if you just want to know the basics, like “is this a restaurant, a bar, or a park?”… that’s not as easy as you might think.

Likewise, Ruby gems can be a lifesaver, but they’re only as good as the time and effort that was put into their design and documentation. The geocoder gem saved my project team many hours of work (even though there’s so much to it that we probably could’ve spent the entire project week learning how to use all the options this powerful set of tools afforded us). On the other hand, our experience with the gmaps4rails gem was less positive.
It does allow coders who don’t yet understand JavaScript to add dynamic maps to their Rails apps, but in the end, I felt the effort we put into making it work might have been better spent on learning basic JavaScript and calling the Google Maps API ourselves.Finding the right tools and tutorials can save you hours of flailing. We were so proud of all the things we managed to do with our site, only to find out we could’ve accomplished many of our goals much more quickly and cleanly if we’d understood JavaScript. Always be open to learning new things, no matter how much you think you know (not that any of us think we know that much yet)!There were many other lessons I took from our Rails project, but I’m currently in the thick of JavaScript project week now, so I’ll save those for another post. I can’t wait to share all I learn this week with everyone!Originally published at sarahgevans.wordpress.com on July 31, 2017.
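Since the geographic-midpoint problem came up above, here is a sketch of one common approach — the “center of gravity” method, which, as noted, is not the only accepted answer. Convert each latitude/longitude to a 3-D unit vector, average the vectors, and convert back:

```javascript
// Geographic midpoint via 3-D averaging. This accounts for the curvature
// of the earth (treated as a sphere) rather than averaging raw degrees.
function toRadians(d) { return (d * Math.PI) / 180; }
function toDegrees(r) { return (r * 180) / Math.PI; }

function geographicMidpoint(points) { // points: [{ lat, lon }, ...] in degrees
  let x = 0, y = 0, z = 0;
  for (const { lat, lon } of points) {
    const la = toRadians(lat), lo = toRadians(lon);
    x += Math.cos(la) * Math.cos(lo); // project onto the unit sphere
    y += Math.cos(la) * Math.sin(lo);
    z += Math.sin(la);
  }
  x /= points.length; y /= points.length; z /= points.length;
  const lon = Math.atan2(y, x);               // back to spherical coordinates
  const lat = Math.atan2(z, Math.sqrt(x * x + y * y));
  return { lat: toDegrees(lat), lon: toDegrees(lon) };
}

// Midpoint of (0°, 0°) and (0°, 90°) on the equator:
console.log(geographicMidpoint([{ lat: 0, lon: 0 }, { lat: 0, lon: 90 }]));
// → { lat: 0, lon: 45 }
```

Note that averaging raw latitudes/longitudes gives the same answer only in easy cases; near the poles or across the antimeridian the two methods diverge badly, which is exactly the debate the post mentions.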
The Real MVP is…. by on 08/30/2017
As I begin my final project here at Flatiron School, I find myself reflecting on everything I’ve learned thus far and how I can put it to good use, either in the last three weeks of the immersive program or in the rest of my (hopefully long!) journey as a coder.I’ve learned a lot — more than I ever believed I could in two and a half months. In the first few weeks, I got over my obsession with making all my Ruby methods a single line. I conquered my fear of hashes and learned to map, filter, and inject like a pro. I discovered that Rails apps are both much easier and much harder to create than I’d previously imagined; you can spin one up in an hour or two, but that doesn’t mean it’ll be easy to troubleshoot it a month later.Time sped up after that. In our JavaScript module, I learned that feeling totally overwhelmed as I took on new information didn’t mean I wouldn’t be able to use it later when I needed to. My jQuery skills could use some polishing, but I picked up a lot more “vanilla” JavaScript than I’d given myself credit for, and it’s stood me in good stead since then.After clearing that hurdle, I went into our React lessons with renewed confidence. React is great, although there’s a lot to it, and I picked up tons of exciting skills on my project that round. I can now make a single-page app with relative ease, thanks to React Router. Not only that, but I’m well-versed in pulling data from APIs, and reasonably skilled at building a Rails API of my own. My styling abilities, while still in their infancy, are coming along slowly but surely.I can make dynamic web apps, and that is awesome!One of my biggest takeaways from React project week, though, was… humility. I had the opportunity to partner with one of the strongest coders in my group.
We were excited about our concept and spent hours planning out our models and components, drawing up a site diagram and a plan of action — all of which fell apart as we realized we’d bitten off far too much project to chew in the four days we had.We came out of the experience relatively unscathed, with an app that (mostly) worked even if it only did — if I’m being generous — about a fourth of what we’d hoped it would. Which brings me to the other huge lesson I learned that week, one I hope will stand me in good stead as I push onward into the adventure of module five.When you plan your MVP (minimum viable product), take what you think you can do in the time you’ve been allotted, and cut it in thirds. Keep one third and figure out how to make it into something solid. The other two thirds are gravy (or “stretch goals”).Wish me luck, I’m off to the (code) races again…Originally published at sarahgevans.wordpress.com on August 30, 2017.
It’s week 11 of my immersive program at the Flatiron School, and at this point a lot of my learning is coming from building out my own simple apps for practice — which right now, as we’re in the midst of learning React, means lots of React apps.

Another super basic app I built for practice: every time you click “Get a new dad joke!”, a new joke is rendered on the page

I spent a day or so building an app that lets you search the New York Times API, then add articles to a reading list that’s persisted in a Ruby on Rails backend database (so when you refresh the page or close the tab and come back, your articles are still on your list). The New York Times API is really easy to work with, but it still took a bit of setup. I won’t get into the connection with a Rails backend in this post, but hopefully this will make setting up the frontend API calls clear for anyone new to React.

The “all articles” screen, which fetches the most recent articles from the New York Times and renders them on the page — or, if you input a search term, the 10 articles matching that search term.

1. Generate an API key

The Times API doesn’t require much to access its content — just request an API key from the website for the specific part of the API you want (I used article search). Probably the most challenging part is figuring out where to put the API key in your URL when making a request to the API! The base format (for the 10 most recent articles in the API) is: “https://api.nytimes.com/svc/search/v2/articlesearch.json?&api-key={your API key here}”. Next, try making a request to the API (I like using Postman initially) and see what the response looks like.

Here you can see it comes back as an object, and the information you want (about the specific articles) is all nested under “docs”. This is really important when processing the information you get back: you read the headline off each article object under “docs” (e.g. something like “docs[0].headline.main”), rather than off a bare “headline” key (which doesn’t exist!).
Make sure to check out what you get back from an API before trying to do any manipulation of the data — all APIs give back different things in different formats, so this is not a step to skip.

2. Make your fetch request from the app

To make these apps, I’m using React — which means JavaScript, which means fetch calls to the API. Luckily, this means it’s pretty straightforward!

If you’re not familiar with fetch requests: fetch makes a request to a given URL and returns a response object, which is then parsed to JSON (JavaScript Object Notation), using .then to link the steps together. After parsing the response, you can use the information to do whatever you want. What I chose to do for my app was to set the state (data attached to a component that is changeable) of a component that contains all of the articles, so that the array of articles can be passed around and rendered by other components.

    fetchArticles = () => {
      let currentState = this.state.articles
      fetch(`https://api.nytimes.com/svc/search/v2/articlesearch.json?&api-key=a69e1cdbb16b4f23841c8f01be77f31a`)
        .then((res) => res.json())
        .then((json) => this.setState({articles: [...currentState, ...json.response.docs]}))
    }

In the above code, I retrieve the current articles from the state (I also have a function that allows for loading additional articles, which is why I grab the existing articles — so they’re not overwritten). I make my API call to the NY Times API, parse the response to JSON with res.json(), and then use that JSON to add the new articles (which are all nested within response -> docs) to the articles already on the page. React automatically re-renders a page when the state changes, so the new articles appear without any additional function calls.

To get a bit deeper into the weeds, I also call this method in a componentDidMount() method, which means that every time the page first loads, it fetches articles and renders them on the page.
I also call it when a button is clicked to load more articles, and I call a slightly different method when a user searches the Times. This way, the page isn’t empty when it’s first rendered, and the API doesn’t get called (and therefore doesn’t return anything) until there’s a component on the page to hold onto the data.

3. Rendering the Articles

To get a little more specific, the call above doesn’t itself make the articles appear on the page. To do that, I pass the information down from the top-level component (the Article Container) to the Article List. The Article List takes all of the articles from the container (passed as props — unchangeable data that comes from the parent component) and maps over them to create each Article.

But that’s still not enough! The Article List itself isn’t responsible for the articles generating HTML on the page. Instead, it passes props down to an Article component, which renders each article as a card with a headline, URL, and snippet. Seems confusing… but that’s React. Each component renders one thing, and it’s important to keep the responsibility of each component in mind while writing your code. Once you get used to it, it’s a lot easier to keep track of where methods live and where data comes from in an app.

The Article component is just a “dumb component”, strictly presentational — it isn’t responsible for any methods or for altering any data; it just takes what it’s given and renders HTML. This HTML is passed up to the Article List, which collects the HTML for each article and wraps it up in one div, then passes it up to the Article Container, which renders the list of articles! Success!

Now you have a basic app, built in React, that lets you load up articles from the New York Times API (or, with slight variations… any API with a key!). Have fun!
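The “dumb component” idea can be sketched without React at all — a presentational component is just a function from props to markup. This is an illustrative stand-in (plain template strings instead of JSX, made-up names), not the app’s actual code:

```javascript
// A purely presentational "component": props in, markup out, no state,
// no data fetching, no methods of its own.
function Article({ headline, url, snippet }) {
  return `<div class="card">
  <h3><a href="${url}">${headline}</a></h3>
  <p>${snippet}</p>
</div>`;
}

// The "list" component only maps over its props and delegates rendering
// of each item to Article — mirroring the container -> list -> card chain.
function ArticleList({ articles }) {
  return `<div class="article-list">${articles.map(Article).join("\n")}</div>`;
}

const html = ArticleList({
  articles: [
    { headline: "Hello", url: "https://example.com", snippet: "A test article." },
  ],
});
```

In the real app, React’s render cycle replaces the manual string concatenation, but the division of responsibility is the same.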
An Intro to Binary Search Trees by on 09/20/2017
Navigate your Way Around with the Uber API by on 09/17/2017
I had the chance to browse through a few of the accessible public APIs. One that I spent a particularly long time on was the Uber API. In the near future, I hope to build a web application using it. https://developer.uber.com/ provides a lot of documentation on the API. The more the better, right? Yes, the docs were immensely helpful, but at the same time I spent the majority of Sunday looking through them completely lost. This will not be a complete tutorial on the Uber API, but I want to take the time to explain some of the approaches I took to deciphering the documents, and go a little into how I would be utilizing it in my application.

To start off, Uber provides five different features you can integrate into your application with its API:
Ride requests
Trip experiences
Drivers
Deliveries
Uber for business

Ride requests cover adding features to your application to get price estimates, search nearby drivers, make ride requests, and anything else related to rides. Their main purpose in your application is to request rides on behalf of the user. Trip experiences offer information about the current ride the user is a part of, such as the time remaining in the ride, the current location of the ride, the pickup and destination locations, etc. You can read more about them here: https://developer.uber.com/docs and choose which API you can use.

First off, you have to register your application to get all the information you need, such as the Server Token, Client ID, and Client Secret, just to name a few. The Server Token acts as the API key you need to make various requests, and the Client ID and Client Secret are essentially your application’s username and password.

The name is going to be the name of your application, and you can also write a brief description of what your application is about.
After you register your new application, you will be taken to the dashboard, where you will see the following:

Generally, the idea is to keep the Client ID, Client Secret, and Server Token private and cryptic. Whatever approach you are taking with your web application, start the fetches with hardcoded values if you want or have to, but by the time you launch your application, no one but the developers should know this authentication information. In my case, I will be using JWTs and tokens in combination with a Ruby on Rails backend to store these bits.

The scope ultimately depends on the functionality of your application. The scope is what prompts users to either authorize or deny your application access to their information, including but not limited to their trip history, saved addresses, and profile — names, email, photos, receipts, etc. (a popup to authorize before a user uses your application). One thing I noticed is that Uber, like many other companies, is very protective of its clients’ personal information. For example, if your application is not in charge of actually requesting rides but still wants to track a particular Uber user, that is practically impossible: a third party cannot gain access to users’ profiles.

You can generate a new access token for auth and use the sandbox that Uber provides to make various requests against the API. The sandbox is free, so up until the point your product launches, you can test all endpoints without making actual Uber requests; the sandbox serves as a testing environment. If you use an application like Postman to view the API response, you can request an access token directly through Postman via OAuth 2.0 by filling out the necessary fields like the picture below. This is just a different way to create an access token.

With this setup, it still honestly took me a long time to get a response back from the fetch.
Ultimately there was an issue with CORS and how I set up my fetch, but it was a simple one, and the API is actually manageable! Below is a simple ‘GET’ sandbox request example to ‘/products’, which lists all the cars (products) available to the user at the user’s current location, given in latitude and longitude. This is also where the access token comes into play, as a param.

    {
      "products": [
        {
          "capacity": 2,
          "product_id": "3145c334-25c6-462d-a2f5-70c38a165746",
          "price_details": { "service_fees": [ { "fee": 2.05, "name": "Booking fee" } ],
                             "cost_per_minute": 0.15, "distance_unit": "mile", "minimum": null,
                             "cost_per_distance": 0.93, "base": 1.05, "cancellation_fee": 5,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-uberx.png",
          "cash_enabled": false,
          "shared": true,
          "short_description": "POOL",
          "display_name": "uberPOOL",
          "product_group": "rideshare",
          "description": "Share the ride, share the cost"
        },
        {
          "capacity": 4,
          "product_id": "1b64bf82-a0ba-4b0f-be32-df8d05481d7e",
          "price_details": { "service_fees": [ { "fee": 2.05, "name": "Booking fee" } ],
                             "cost_per_minute": 0.15, "distance_unit": "mile", "minimum": 7,
                             "cost_per_distance": 0.93, "base": 1.05, "cancellation_fee": 5,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-uberx.png",
          "cash_enabled": false,
          "shared": false,
          "short_description": "uberX",
          "display_name": "uberX",
          "product_group": "uberx",
          "description": "the low-cost uber"
        },
        {
          "capacity": 4,
          "product_id": "bbec56dc-1c72-44ea-ba64-fe51bf392c09",
          "price_details": { "service_fees": [ { "fee": 2.05, "name": "Booking fee" } ],
                             "cost_per_minute": 0.15, "distance_unit": "mile", "minimum": 7,
                             "cost_per_distance": 0.93, "base": 1.05, "cancellation_fee": 5,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-uberx.png",
          "cash_enabled": false,
          "shared": false,
          "short_description": "uberX",
          "display_name": "uberX to NYC",
          "product_group": "uberx",
          "description": "the low-cost uber"
        },
        {
          "capacity": 6,
          "product_id": "a539ddeb-a2e4-43b5-9c51-3a53e0c74c0c",
          "price_details": { "service_fees": [ { "fee": 2.3, "name": "Booking fee" } ],
                             "cost_per_minute": 0.18, "distance_unit": "mile", "minimum": 8.3,
                             "cost_per_distance": 1.63, "base": 1.5, "cancellation_fee": 5,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-uberxl2.png",
          "cash_enabled": false,
          "shared": false,
          "short_description": "uberXL",
          "display_name": "uberXL",
          "product_group": "uberxl",
          "description": "low-cost rides for large groups"
        },
        {
          "capacity": 4,
          "product_id": "15a0b7b9-36b5-4451-8759-0c1ef4b3b7e1",
          "price_details": { "service_fees": [],
                             "cost_per_minute": 0.65, "distance_unit": "mile", "minimum": 15,
                             "cost_per_distance": 3.81, "base": 7, "cancellation_fee": 10,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-black.png",
          "cash_enabled": false,
          "shared": false,
          "short_description": "BLACK CAR",
          "display_name": "uberBLACK",
          "product_group": "uberblack",
          "description": "The original Uber"
        },
        {
          "capacity": 6,
          "product_id": "320eb522-035a-4e7f-adc6-71af8e2404bc",
          "price_details": { "service_fees": [],
                             "cost_per_minute": 0.8, "distance_unit": "mile", "minimum": 25,
                             "cost_per_distance": 4.56, "base": 14, "cancellation_fee": 10,
                             "currency_code": "USD" },
          "image": "http://d1a3f4spazzrp4.cloudfront.net/car-types/mono/mono-suv.png",
          "cash_enabled": false,
          "shared": false,
          "short_description": "SUV",
          "display_name": "UberSUV",
          "product_group": "suv",
          "description": "Room for everyone"
        }
      ]
    }

Using the JSON response and ReactJS, I rendered onto the page the kinds of products available, their passenger capacities, and images.

The next steps to request a ride would be: 1.
Use the product_id provided in the response as a param, in combination with the origin and destination locations in longitude and latitude, to make a ‘POST’ request to ‘/requests/estimate’ to get an upfront fare. 2. Use the fare_id provided in the response and the origin and destination coordinates to make a ‘POST’ request to ‘/requests’ and finally make the ride request.

Resources:
Developers | Uber
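To make the two POST steps in the walkthrough above concrete, here is a sketch of assembling the request options for them. The endpoint paths and the uberPOOL product_id come from the walkthrough; the sandbox base URL, the coordinates, and the buildRequest helper are illustrative assumptions, not the Uber SDK:

```javascript
// Hypothetical helper that assembles a POST request for the Uber sandbox.
// The base URL is an assumption — check the developer docs for the current one.
function buildRequest(path, accessToken, body) {
  return {
    url: `https://sandbox-api.uber.com/v1.2${path}`,
    options: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    },
  };
}

// Step 1: upfront fare estimate (coordinates are made up).
const estimate = buildRequest("/requests/estimate", "<access token>", {
  product_id: "3145c334-25c6-462d-a2f5-70c38a165746",
  start_latitude: 40.7527, start_longitude: -73.9772,
  end_latitude: 40.7061, end_longitude: -74.0087,
});

// Step 2 would POST the fare_id from the estimate response, plus the same
// coordinates, to "/requests" to actually request the ride.
```

In the app, each of these would be handed to fetch as `fetch(req.url, req.options)`.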
The way that we write code and build new technology has a direct correlation with our environmental impact. I’m not just talking about the rapid rate at which the West is generating e-waste (the fastest growing waste stream in the industrialized world) and exporting it overseas to be “recycled”, releasing huge quantities of toxins into the air, soil, and water. Right here at home, the EPA reports that approximately 29% of carbon emissions in the United States are tied to electricity production. That’s more than transportation, which clocks in at around 27%. So while we are all dreaming of a technological future with bitcoin and futuristic cars, we need to pause and recognize that we’re not thinking about impact nearly as much as we should.

Bitcoin and Ethereum are using tons of electricity (literally)

Digiconomist, which “is a platform that provides in-depth analysis, opinions and discussions with regard to Bitcoin and other cryptocurrencies”, has running indices that project the amount of electricity that cryptocurrencies use annually.

https://digiconomist.net/bitcoin-energy-consumption

Bitcoin and Ethereum are estimated to use more electricity than Ecuador each year. That’s an astonishing statistic. To connect it to something a little closer to home, and see it on a much smaller scale:

https://digiconomist.net/bitcoin-energy-consumption

ONE standard bitcoin transaction could power more than 6 U.S. households for one day. Frankly, that is ridiculous. Both Bitcoin and Ethereum claim to be moving towards more efficient algorithms, which in turn would mean less energy consumption, but Bloomberg first published an article about the environmental consequences of bitcoin mining back in 2013, and things have gotten worse, not better, as cryptocurrency continues to grow.

What does that mean for software developers and engineers?

Some of us might get jobs directly connected to this, working for a company that uses the blockchain.
But there are tens of thousands of software engineers who don’t work with codebases that run such energy-hungry transactions. That doesn’t mean they are not at all responsible.

During our time learning code, we’ve been writing very small projects. Some of us might deploy them to Heroku, but they aren’t going to production and they probably won’t be used by very many people. Our code was written for the sake of learning. It has been messy and often inefficient, and while we’re trying to do better, that’s pretty okay for now.

We’ve learned a bit about writing more efficient code, too, and the importance of refactoring. Mainly, we’ve considered refactoring and efficiency in terms of the developers who will come after us and the user experience. I think it’s important to talk about it in terms of environmental impact, too.

Let’s (finally) look at a tiny code snippet to see this in action and think about economies of scale. Using Benchmark, we can compare two very simple Ruby methods and how long they take (in real time!) to run.

https://dzone.com/articles/how-do-i-benchmark-ruby-code

with the results

https://dzone.com/articles/how-do-i-benchmark-ruby-code

A one-tenth-of-a-second difference, when you are just testing a project that is mostly for learning, makes little difference. Honestly, I mostly feel accomplished if I 1) finish a project or 2) it doesn’t break in the middle of a presentation. Rarely do both happen at the same time. Small victories!

But going forward, when I could potentially be working on a codebase that has thousands or even millions (!!!) of users, milliseconds add up.
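The Ruby Benchmark screenshots didn’t survive here, but the same experiment is easy to sketch in JavaScript. The two functions below are illustrative stand-ins (not the snippet from the linked article): they compute the same sum, but one rebuilds an array on every iteration and so burns measurably more time — and therefore energy:

```javascript
// Wasteful version: concat allocates a brand-new array each iteration, O(n²).
function sumWithConcat(n) {
  let arr = [];
  for (let i = 0; i < n; i++) arr = arr.concat([i]);
  return arr.reduce((a, b) => a + b, 0);
}

// Efficient version: push reuses one array, O(n).
function sumWithPush(n) {
  const arr = [];
  for (let i = 0; i < n; i++) arr.push(i);
  return arr.reduce((a, b) => a + b, 0);
}

let t = Date.now();
const slow = sumWithConcat(5000);
const slowMs = Date.now() - t;

t = Date.now();
const fast = sumWithPush(5000);
const fastMs = Date.now() - t;

// Same answer, very different cost.
console.log({ slow, fast, slowMs, fastMs });
```

On my reading, the exact millisecond numbers will vary by machine — the point is only that two functionally identical methods can have wildly different costs once they run millions of times.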
The energy those milliseconds of processing use adds up. Now let’s say the codebase where this method is used has 500,000 lines of code and 20,000 daily users. If each of those users interacts with the program in a way that triggers that method, that’s more than 45 minutes, cumulatively, of extra time and energy just to use that method!

To be clear, that processing time is not the same as a kilowatt-hour, and estimates of how much power programs consume vary widely. Some sources claim laptops use less than a hundred kWh per year; others claim average use racks up that much consumption in a month or two. It’s also difficult for developers to test how much energy their applications are using. But at the end of the day, code uses energy, and inefficient code uses more energy.

So, what can we do?

Bluntly, if you care about sustainability and global warming, you should probably start writing cleaner code.

Do a lot of testing. Write faster methods. Maybe it doesn’t seem to matter at first if your project is only going to be used by 100 people. Do it anyway. Get in the habit. Plus, things add up.

Delete code you don’t need. You don’t need to be using server space storing stuff you literally never use. Servers use an enormous amount of energy.

US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, the most recent year examined, representing 2 percent of the country’s total energy consumption, according to the study. That’s equivalent to the amount consumed by about 6.4 million average American homes that year. (http://www.datacenterknowledge.com/archives/2016/06/27/heres-how-much-energy-all-us-data-centers-consume)

When you’re not working, occasionally read a book or go outside instead of spending 3 hours wasting time on the internet. (We all do it!)

Bring it up with your coworkers, your supervisors, your project managers. You might get a reputation as the crazy environmentalist. That is okay.
Someone has to do it.

RESOURCES
A Single Bitcoin Transaction Takes Thousands of Times More Energy Than a Credit Card Swipe
Bitcoin Energy Consumption Index - Digiconomist
How Much Energy Does Bitcoin Use? A Lot It Turns Out.
http://2013.ict4s.org/wp-content/uploads/A4-3-Sedef-A.KOCAK-The-Impact-of-Improving-Software-Functionality-on-Environmental-Sustainability.pdf
Objectives of This Post
Provide a real-life example of how python-pptx is useful.
Explain a few of the features of python-pptx.
Share my programmatic thinking aloud, in hopes of giving readers an insight into how to approach similar problems.

(Note: This walkthrough will not really address the aesthetic/formatting aspect of PowerPoint. As far as I can tell after having gone through much of the documentation, python-pptx is lacking a bit in this area and was not built to assist with formatting beyond basic slide structure.)

A Quick Intro

Steve Canny’s python-pptx is a great library for getting started using Python to create dynamic PowerPoint slides. PowerPoint presentations are often short, sweet, and full of pictures and other media. But sometimes, well, they aren’t, and when that is the case, having a tool to make your life easier beats slogging through creating lots of word- and data-intensive slides. Before following along with this more extensive tutorial, feel free to check out this video tutorial by David Cameron to get a very basic understanding of how python-pptx works. (Note: I will only be showing the most basic features of the library. It does a lot more than I will be demonstrating.)

The (Not So) Hypothetical Problem

If your husband/wife is a high school administrator (like mine), perhaps s/he wants to regularly create a PowerPoint presentation that can be used to shout out students who have high homework averages. This is a great idea but a pain to implement each week, unless no one in the school is doing their homework. The more kids who have high homework averages, the more names have to go into that PowerPoint, and the list is likely to vary each week. And if your school is nearly 1,000 students large (like my wife’s), that could mean A LOT of names to type in each week. Luckily, most grade books are managed online now, so getting the data into a manipulable form is as easy as downloading it from the school’s chosen web app.
Also luckily, a programming mind, a quick Google, and 45 minutes of trial and error can solve the other half of the problem: getting that data into a PowerPoint.

The Basic Structure of python-pptx

After installing the package, using python-pptx is the same as using any other library. At the top of the file, import the dependencies you will need. Besides the initial python-pptx import, dependencies are project specific: we need an extra python-pptx utility (Inches) and the ability to open and parse a CSV, but you will almost surely need other or different libraries and tools for your project(s).

python-pptx then has a basic setup to get started. The official documentation opens with just two lines: create a new presentation object, then choose one of the eight default master slide layouts native to PowerPoint. (They are accessible in order of their appearance in the program, with the first one, index [0], being the "title slide" layout.) Since our hypothetical problem is a bit different, our setup diverges slightly. We use our imported csv library to open and read our file of student data. And since we need to make this presentation each week, we don't want to have to format it every time, too; so instead of creating a brand-new presentation each run, we open an existing base file. That way we can make some formatting decisions — like font and color — once, and add our data slides to that master format. Next, we choose the blank slide layout, located at index [6] of the master slide formats, and create an initial slide to work with, using a series of chained method calls as per python-pptx's documentation.
Now we are officially ready to start tackling the actual problem: getting hundreds of kids' names into well-structured slides with the click of a button.

Thinking About This Problem Like a Programmer

Depending on your project, you will need to determine at least two basic details of your desired outcome:

- How many names/objects do you want to fit on each slide?
- How do you want them organized?

In the case of our (not so) hypothetical problem, a little trial and error on my part determined that 3 rows of 14 names looks best-ish. Programmatically, this is a pretty easy problem to solve. We will need a few variables to keep track of our iterations and make adjustments at intervals based on our 3-rows-of-14-names constraint. The counter r is for "row," as in the data row of the CSV file, and placeholder is the name python-pptx uses for objects that hold data on a slide, so it made sense to call our counter of placed objects that, too. We will increment these as needed to keep track of our count and make adjustments for rows and columns.

Next, we need a way to adjust the placement of our rows on the slide. Each name in a row needs to sit immediately below the previous one, and each new row needs to shift to the right and start back at the top. Browsing the documentation shows that python-pptx provides a solution: the add_textbox method takes positional params for position and size, which the documentation example sets to an instance of 1 Inch apiece. This is great news for our project, but requires some customization: our starting values (once later passed into Inches()) place our initial textbox in the top left corner of the slide, right where we want to start. Now, for the for loop.
First, we open up the loop and skip the first row to account for our data headers. Then we set up our incrementation check for rows and columns: when our slide is totally full, we add a new slide, reset our placeholder counter, and reset our left and top values to take us back to the top left corner of the new slide. If our first or second row is full, we increment our left variable by 3 (so that the new row is indented sufficiently) and reset our top variable so that the new row starts at the top of the slide.

Now we can fill our slide with names! Our data is structured so that column two holds the students' first names and column five the homework averages. So we assign these to variables using the csv library syntax and the proper indexes (row indexes start at 0, just like most things in programming). Then we create a new textbox using our imported Inches utility and our preset variables, and fill it with the text from our row variables. Next, we increment our placeholder counter, increase our top variable so that the next placeholder sits below the previous one, and (outside of the if statement but still inside the loop) increment our row counter to move to the next row of data. Finally, once our loop is done and all of our names are in, we save our presentation.

The End Result

Three rows of 14 names, just like we wanted! We have 6 slides of more than 250 names next to their homework averages, all with the click of a button. It's not formatted beautifully (yet), but the hard part is done, and we have hundreds of names programmatically inserted into a PowerPoint!

Final Thoughts and Takeaways

This project turned out to be a lot simpler than I imagined. When my wife first asked me to do it for her, I thought for sure that it would be impossible, but once I got going, it took no time at all.
Here are some final thoughts about this project and projects like it in general:

- If you have an idea, ignore the instinct to think it impossible. It's probably easier than you think, and if not, Google and YouTube are invaluable resources.
- Basic algorithmic thinking is very useful and will never go out of style, no matter how fancy a programmer you become.
- Breaking a larger problem down into its component parts makes solving it fantastically easier.
- Python is a smooth, syntactically sweet language that is fun to code with.
Finding our ethical rhythm with algorithms by on 09/17/2017
A reminder of the moral obligations of developers and the ethics of algorithms

We live in the age of the algorithm. For most people — especially those like myself who rely heavily on the internet for most of their everyday dealings — it wouldn't be an overstatement to say that almost all aspects of our lives are significantly affected by algorithms.

Let's start with two examples that have been catching headlines lately. First, Facebook, whose personalized news feed has inadvertently created political echo chambers, which many would argue have acted as a catalyst for the political divisiveness we see today, ultimately swaying (to some degree) the result of the 2016 election. Next, the tale of United and Dr. David Dao, the doctor from Kentucky who was bloodied and dragged off a plane for refusing to leave his seat on his overbooked flight. How did United select the passengers it would force off? Algorithms. An algorithm most likely determined that Dr. Dao, his wife, and two others were the least valuable customers to United's business.

It's likely that many of us do not realize, or give much thought to, the true pervasiveness of algorithms and the extensive repercussions of their use and design. Algorithms have quietly seeped into our lives, shaping the outcomes of the weightiest, most important events, such as getting a job or taking out a loan, down to the most mundane and trivial occurrences, like reading the news, buying shampoo, or the attractiveness of your next right swipe. Everything has become personalized, and in most cases we have no way of knowing how. Or, for that matter, by whom. It's almost like having a personal stylist you've never met deciding what you should wear based only on what they've heard about you. You may be thinking: what do you mean, by whom? The algorithm, that's who! But the truth is that algorithms aren't as objective as we'd like to make them out to be.
And just because they are rooted in mathematics does not exempt them from meeting ethical standards. An algorithm's design and its chosen dataset are simply an extension of the algorithm's creator, and thus carry the predilections and intentions of that inherently subjective and fallible human.

To further illustrate the point: the criminal justice system in the US currently uses algorithms to set bail, measure recidivism, and help guide judges in sentencing, all life-altering metrics and decisions for those deemed guilty of committing a crime. Algorithms have slowly replaced the judgment of individual judges, with the intention of normalizing results to ensure fairness. Some judges are generally stricter than others, and some have a history of penalizing certain races more harshly than others. But even with these supposedly objective algorithms in place, some racial groups still receive harsher punishments than others, signifying that we still haven't been able to eliminate racism from the sentencing process. In many cases, algorithms are seen as "fair" because they do not account for race directly, but they make calculations based on data that corresponds with race (like where a person lives), and therefore end up being proxies for race. We're taking strides in the right direction, but we still have a ways to go and must recognize the fallibility of algorithms.

Now, this all sounds negative, but I personally believe that, in the aggregate, algorithms do much more good than harm and, if used correctly, can be invaluably beneficial to society. But I also believe that developers have a moral obligation to be mindful and cognizant of the far-reaching repercussions of their creations.
It is easy to forget the ethics and fairness underlying an algorithm's output because the calculation itself is cold, hard math, but we must not forget that mathematics is a language that can easily obfuscate the truth. Ultimately, developers are often the first line of defense against the tyranny of the algorithm, and they must be vigilant when creating such powerful tools.

Sources:

- The Ethics of Algorithms
- The Age of the Algorithm - 99% Invisible
- How algorithms are used to set bail
- Algorithms in the Criminal Justice System | Berkman Klein Center
Learn to Program, not a Programming Language by on 07/10/2017
The hottest framework doesn't matter; fundamentals do.

Learning to program is hard. Not only do you have to learn it, you have to decide what to learn. First you have to know where you want to fit in. Do you want to focus on frontend or backend? Are you interested in devops, game development, data analysis, mobile development, web development? Which is right for you?

Next you might wonder what language you should learn. Java makes so much money, why not that? JavaScript seems hot, maybe that. But then which framework should I learn? React is so popular, Angular is backed by Google, but Vue is rising. Then you have to learn how to manage the state of your project: is Redux the answer, or is Flux? My time is valuable; I'd better choose right.

None of it matters: you don't know how to program.

During my time learning programming on my own, I spent countless hours on tutorials and videos, trying to learn whatever seemed popular and in fashion in the development world at the time. At my job I worked in Python with the Django framework; then my company switched to PHP a few months ago. On my own time I was learning basic JavaScript along with Node.js and React, then switched to Vue. I spent hours working but wasn't learning. I could make a WordPress site or a to-do app go in whatever language I was working in, but I didn't know how to program. I didn't even know how to learn programming.

Writing code is a skill that needs to be developed like any other craft. Start simple, and do it. Watching other people write code isn't enough. Adjusting other people's code isn't enough. You have to write your own code that solves your own problems. They can be made-up problems, but the practice of problem solving is the fundamental basis of coding.

Steph Curry didn't perfect his jumper watching his dad shoot, and he didn't start from the 3-point line.
Aspiring coders need to start basic and practice typing code that solves problems.

(credit: Reddit user 36DD)

Programming knowledge is transferable. Software changes. If you're a strong programmer, you can easily adapt to a change in your development stack. If you spent your time learning only one language, then when you switch jobs or your company switches tech, you're left in the dust.

Syntax is easy to learn once you have a foundation.

Ruby:

array = ["Chef", "Curry", "with", "the", "pot", "boy"]
array.each { |item| puts item }

JavaScript:

array = ["Chef", "Curry", "with", "the", "pot", "boy"]
array.forEach(function(item){ console.log(item) })

Not so different.

Knowledge of data types, object-oriented principles, and design patterns doesn't change, and it is used no matter what you do in the field. Don't ever delay learning in an effort to find the right thing to learn.

Versatility is Value

As a developer, every language you know opens up more job opportunities and creates value for employers. If the language you spent hundreds of hours learning and working in is no longer popular, that doesn't mean there are no longer companies using it. The knowledge you have as a developer is still a valuable tool in your toolbox and more than just a line on a resume.

Being teachable and flexible is the more valuable commodity to a company looking for developers. You'll constantly be learning throughout any career as a developer, so get started.

Knowing where to start is the hardest part. Just know it doesn't really matter that much. Sometimes the overload of learning resources can make things harder, especially if you're self-teaching. As long as you're coding, you're learning something valuable that will make you better in the long run. Sometimes you'll learn a hundred ways not to make a program. Picking the wrong language or framework to learn won't be your mistake; not starting will be.

Before you pick a language, learn to speak.
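To push the Ruby-versus-JavaScript comparison above one step further, the same one-liner in Python (illustrative):

```python
# Same list, same idea, third language.
array = ["Chef", "Curry", "with", "the", "pot", "boy"]
for item in array:
    print(item)
```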
Up your Text Editing Game by on 07/25/2017
Because you can't MichaelJordan functions.

Within the functional programming paradigm, our main goals, as broken down by Eric Elliot, are:

- Pure functions
- Function composition
- Avoid shared state
- Avoid mutating state
- Avoid side effects

This post isn't about making a case for functional programming, or even an in-depth explanation of what it is. If you want to read more about it, check out this. For our purposes we just need the basics to figure out how function currying fits within the paradigm.

What is it?

In mathematics and computer science, currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument. Currying is related to, but not the same as, partial application. — https://en.wikipedia.org/wiki/Currying

Basically, we're breaking down a function that takes many arguments into a chain of functions that each take one argument and return a function. Your functions will step-wise receive arguments as your program runs until you reach your output.

So let's start with a simple function, without currying, that returns a string based on the arguments we give it. Easy stuff. Everything works, but what if we wanted to use currying to break the function down into multiple functions, each taking one argument? By returning a function at every step until we reach the end of the chain, we get the same result, but notice how we have to execute the function differently: 4 separate function calls instead of 1 function with 4 arguments.
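The before-and-after transformation described above, sketched here in Python with illustrative, Curry-themed names and arguments:

```python
# Uncurried: one function, four arguments.
def announce(first, last, verb, noun):
    return f"{first} {last} {verb} the {noun}!"

# Curried: a chain of single-argument functions, each returning the next
# function in the chain until the final one returns the output.
def announce_curried(first):
    def take_last(last):
        def take_verb(verb):
            def take_noun(noun):
                return f"{first} {last} {verb} the {noun}!"
            return take_noun
        return take_verb
    return take_last

# One call with four arguments vs. four calls with one argument each:
plain = announce("Steph", "Curry", "drains", "three")
curried = announce_curried("Steph")("Curry")("drains")("three")
```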
Steph, how do you feel about this version, though?

But we can clean this up using ES6 arrow functions with implicit returns. Much better: the simplified version is way less code and much more readable to anyone looking at the function.

Why Use Currying?

Going back to functional programming, we want our functions to be small and reusable, so they can easily be used in more flexible ways. This helps us avoid repeating parts of our code and makes it more readable in the process. One of the foundations of the functional programming paradigm is 'function composition.' I'll once again let someone smarter than me explain exactly what that means:

"Function composition is the process of combining two or more functions to produce a new function. Composing functions together is like snapping together a series of pipes for our data to flow through." — Eric Elliot (https://medium.com/javascript-scene/master-the-javascript-interview-what-is-function-composition-20dfb109a1a0)

Passing to a Library

The reality of our example is that the function was probably easier to write without currying. So instead of taking on the job of currying our functions ourselves, we can pass the ball off to a library. In this example I'm going to use lodash, but many libraries have a currying function or something similar: Ramda, underscore, and function.js all have their own implementations, and the concepts are used in almost all modern frameworks like React, Angular, and Vue. By using lodash's _.curry function, we create a function that returns a function expecting 1 argument: in this case, the team we're filtering by.
So now, instead of writing a filter function that takes in both the object and the property we want to match, we can break the process down so that the playerTeam function we use in our filter needs only one argument: the team we're matching. This simplifies our code and also creates some reusable, more readable functions.

Further Reading

- A Beginner's Guide to Currying in Functional JavaScript — M. David Green
- Currying in Javascript ES6 — Adam Bene
- Master the JavaScript Interview: What is Functional Programming? — Eric Elliot
- Mostly adequate guide to FP (in javascript)
- YouTube — Currying — Part 6 of Functional Programming in JavaScript by funfunfunction
- Stephen Curry Wikipedia

Special thanks to Stephen Curry for collaborating on this blog.
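The lodash pattern described above has a rough Python analogue in functools.partial, which fixes the team argument once so that the filter only needs a one-argument predicate (the data and names here are illustrative, not from the post):

```python
from functools import partial

players = [
    {"name": "Stephen Curry", "team": "Warriors"},
    {"name": "Kevin Durant", "team": "Warriors"},
    {"name": "LeBron James", "team": "Cavaliers"},
]

# Two-argument predicate: which team, which player.
def plays_for(team, player):
    return player["team"] == team

# Fix the team once; the result is a reusable one-argument predicate,
# much like the curried playerTeam function described above.
is_warrior = partial(plays_for, "Warriors")
warriors = [p["name"] for p in players if is_warrior(p)]
```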