One thing that struck me as a junior developer when I began my career was that I didn't really understand what software developers did on a day-to-day basis. While many coding courses teach you about the hard skills you need to make websites and web applications, like building a page with HTML, styling it with CSS, adding functionality with JavaScript and so on and so forth, there often isn't a whole lot of information about what you'll be doing once you land your first job.
This realisation really hit home in my first job, when I found that I didn't know where to start.
Frontend Development
Building basic site structures
For context, I was given a project to work on. It was new and I just had to build the HTML, CSS and JavaScript for a website. The HTML templates I built were eventually chunked up by a Typo3 integrator, someone who added modules into the CMS that could then be reused and populated with content. But how would I start?
The first step in getting any project off the ground is setting up a JavaScript project. I used npm back then, which inevitably leads you to a package.json file. What turned out to be much more complex than I'd thought was creating some sort of build process, where my Sass and jQuery code would go in at one end, and compiled and uglified CSS and JavaScript would be spat out at the other. I don't remember any more exactly what that entailed, but I know I needed help setting it up.
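A minimal version of such a build process might look something like this in package.json (the package names, versions and file paths here are assumptions for illustration, not the actual project's):

```json
{
  "name": "my-website",
  "private": true,
  "scripts": {
    "build:css": "sass src/styles/main.scss dist/main.css --style=compressed",
    "build:js": "uglifyjs src/scripts/main.js -o dist/main.min.js --compress --mangle",
    "build": "npm run build:css && npm run build:js"
  },
  "devDependencies": {
    "sass": "^1.69.0",
    "uglify-js": "^3.17.0"
  }
}
```

Running `npm run build` then compiles the Sass and minifies the JavaScript in one step.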
Once that was done, I was onto the actual development of the project. Where to begin? I was faced with a bunch of pages in a design file and told to implement them. I decided to start with the header - a good choice then and a good choice now, all other things being equal. The header goes on every page and contains the same content, so it was an obvious first choice. It took me quite a while (it was a three-tiered header with complicated nesting), but I was proud when I finally finished.
The question then was: how to convey my beautifully-crafted header to the Typo3 integrator who was waiting for it? This is where your classic version control system (VCS) comes into play. We used Git, but not GitHub - thinking about this now makes me shudder, because GitHub is so clearly the most popular and, to my mind, best tool for remote version control (some will argue for GitLab or Bitbucket, but I disagree). So, I had to create a branch, commit my changes, publish the branch to the remote repository (the equivalent of GitHub) and link the branch to the task I was working on. Only then could my colleague access the code I'd written.
All of this now seems so familiar and easy to understand, but back then it was a foreign concept to me, not least of all because my colleagues were explaining this to me in German, literally a foreign language to me!
Working with data
Another part of frontend development is working with data given by "the backend". This could be any API, but is typically an API internal to the company that provides dynamic data to the browser, which then has to be handled by the frontend. One such example that I worked on in my first role was adding data to a Google Maps integration.
First off, you need a way to add the actual map to the page - the Google Maps API provides some pretty nice utilities for doing this, and probably SDKs for working with React/Angular/Vue/etc. Back then, I was working with plain old HTML and CSS, so I had to manually add an area for the map to appear in, and then call the API to add the map component to the page.
I then used some of Google Maps' features to add points to the map, clustering them when several fell within a certain area, and centering and zooming the map so that all points were visible when it first loaded and were distributed as evenly as possible around the visible area. What I had no concept of at that point was making API requests to fetch the data. The website didn't provide an endpoint to grab the map data from, and the backend developer on the project was also a junior, so perhaps the approach we took wasn't ideal: he added all the map data as a string of JSON to a custom attribute on the <div> that would contain the map. I had to convert that JSON into a data structure I could work with in JavaScript. From there, I had a list of points for the map. Looking back, this approach was pretty horrible, and I still remember how much I struggled to get the data out of the stringified JSON. I think this was probably my first encounter with JSON, and it was a valuable lesson in how to deal with it. I only wish I had asked for some help here, because it took a long time to understand what was going on.
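For illustration, here's roughly what that pattern looks like. The attribute name and data shape are hypothetical; in the browser you'd read the string via getAttribute, but it's inlined here so the snippet runs anywhere:

```javascript
// Hypothetical markup: <div id="map" data-points='[...]'></div>
// In the browser you would read it with:
//   const raw = document.getElementById('map').getAttribute('data-points');
const raw = '[{"lat":52.52,"lng":13.405,"label":"Berlin"},' +
            '{"lat":48.137,"lng":11.575,"label":"Munich"}]';

// JSON.parse turns the stringified JSON into real JavaScript objects
const points = JSON.parse(raw);

console.log(points.length);   // 2
console.log(points[0].label); // "Berlin"
```

From here, `points` is an ordinary array you can loop over to place markers on the map.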
Another useful takeaway from this task, though, was the value of the console in Google Chrome's developer tools. As far as I know, all other browsers have an equivalent feature. Not just that, but also the power of mastering the DOM manipulation and querying API that browsers expose via JavaScript. Querying and inspecting DOM nodes and their attributes in a live environment allowed me to play around and figure things out much more quickly than making a code change, saving it, reloading the browser and trying again. With today's better developer tools, this feedback loop has improved, but having access to the DOM nodes right there in the browser gives the kind of immediate feedback that makes it much more comfortable to "hack together" a solution. A lot of what I've done as a developer, both professionally and in my spare time, has been assisted by working this way.
Working with data in a "normal" way
As mentioned before, it's not so common to have the data you need just lumped into some custom attribute on a <div> tag. The "normal" way of fetching data is to make API requests. I'm a fan of the native fetch function that is available in your browser, although it does have some drawbacks. Previously, it wasn't available in all browsers, meaning it wasn't really suitable for production applications; nowadays, all major browsers support it. Another downside is its rather minimal feature set. With tools like axios, you get a much richer set of features geared towards querying and updating data and dealing with the responses.
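As a sketch of why people reach for richer tools: fetch resolves its promise even for HTTP error statuses like 404 or 500, so you end up hand-writing checks that libraries like axios do for you. The wrapper name here is my own:

```javascript
// Minimal wrapper around the native fetch API (available in modern
// browsers and in Node 18+). fetch only rejects on network failures,
// so HTTP-level errors have to be checked by hand via response.ok.
async function getJSON(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```

With axios, the equivalent of the `response.ok` check and the JSON parsing happen for you by default.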
Dealing with data quickly raises the question: what do I do when I need to request the same data again? For example, if the user refreshes the page or navigates back to it shortly after navigating away. A simple, yet often dismissed, answer is to just make the request again. Many tools, especially in the React ecosystem, make an attempt at caching responses. In particular, react-query prides itself on keeping track of all requests that have been made so that duplicate requests aren't sent, lightening the load on your API and improving the speed of your frontend. It even gives you the chance to manually modify the data in the cache when updating, so that a refetch isn't necessary. I personally find a lot of these methods over-the-top. For a good number of applications, there isn't the demand, in terms of either performance or load on the backend, to justify overly complex caching. I won't reject it where I get it for free, as lately with react-query, but I also don't spend much energy on optimising caching when I know it's unlikely to be an issue.
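When some caching is warranted, it doesn't have to be complicated. A hand-rolled sketch of the core idea that react-query implements far more thoroughly - remember the promise for each URL so repeat requests reuse the in-flight or completed result - might look like this (the names are mine):

```javascript
// Naive request cache: keyed by URL, storing the promise itself so
// that concurrent calls for the same URL share a single request.
const cache = new Map();

function cachedFetch(url, fetcher = fetch) {
  if (!cache.has(url)) {
    cache.set(url, fetcher(url));
  }
  return cache.get(url);
}
```

Real tools layer invalidation, staleness windows and error handling on top of this; the sketch only shows the memoisation at the centre of it.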
Full-Stack Development
While I've held a backend-only role in my career, the typical workday there was so far removed from the other positions I've held that I won't describe it in much detail. Let's look at full-stack development instead, which is more where my interest lies.
Full-stack is my preference because it gives you the power to build features end-to-end. What does this mean? Simply that you receive the requirement for a particular feature and the power lies with you to make it all work, from start to finish, backend to frontend. Let's look at a pretty straightforward and quite common example: adding a form to gather some data.
Starting with the API
I typically start with the backend of any such task, unless I have a really good reason to start elsewhere. This means adding one or more endpoints to my backend that will be used to create or update some data. I tend to follow REST when building an API, not only because it's well documented and discussed online, but also because Ruby on Rails favours REST APIs. It's possible to build a GraphQL API with some supplementary gems, but that feels like going against the conventions of Ruby on Rails, which isn't usually easy to do.
Creating and testing an endpoint is quite easy with Ruby on Rails: you can generate a controller to contain the REST endpoints using the Rails CLI, and from there it's a matter of ensuring that your endpoint accepts the right parameters and creates/updates the data as required. At this point there is no frontend to try the API out with, so a tool like cURL, a CLI that allows you to make requests from your terminal, is a good way to test your endpoint and make sure you're returning the correct responses (HTTP status codes) and data. Another approach I like to use is Test-Driven Development, or TDD. Using TDD, you can make assertions about what you expect your endpoint to do under certain conditions. Forgot the authorization? It returns 401 Unauthorized. Didn't structure the data correctly? It returns 422 Unprocessable Entity. Sending data that already exists? It returns 409 Conflict. In this way you document your endpoint with tests, which in my view is much better than actual written documentation. Why? Because a failing test tells you immediately that it no longer describes your code, whereas written documentation can become outdated without anyone noticing.
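To make that status-code contract concrete, here's the same idea as a framework-agnostic JavaScript sketch rather than Rails (the function and its inputs are hypothetical); the tests I'd write under TDD assert exactly these codes:

```javascript
// Hypothetical create-endpoint logic, reduced to the status-code
// decisions described above. A real Rails controller would derive
// these conditions from the request headers, params and database.
function handleCreate({ authorized, valid, duplicate }) {
  if (!authorized) return { status: 401, body: 'Unauthorized' };
  if (!valid)      return { status: 422, body: 'Unprocessable Entity' };
  if (duplicate)   return { status: 409, body: 'Conflict' };
  return { status: 201, body: 'Created' };
}
```

Each branch corresponds to one test case, so the tests double as documentation of the endpoint's behaviour.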
Using your new API endpoints
Once my API endpoints are ready to go and properly tested, I like to write the most basic frontend code I can to use them. Starting with a prototype makes sure my backend tests accurately describe the conditions the endpoint will be used in. Imagine I'd mistyped the name of the header that carries my access token: building a small prototype that uses the endpoint in a crude but accurate way will flush out issues like that in the integration with the backend. After that, I like to test-drive my frontend code as well. Should the page redirect after the form is submitted? I can write a test for that. What elements appear on the form? I can write a test for those. Does an error message show up if something goes wrong with the update? I can write a test for that too. In this way I take the guesswork out of the development phase: I know exactly what I need to build because I write tests for it all before implementing it.
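One way to make those frontend behaviours easy to assert is to inject the handler's dependencies, so a test can substitute fakes for the API call and the redirect (all the names here are illustrative, not from a particular framework):

```javascript
// Hypothetical form-submit handler with injected dependencies.
// `api` performs the request and `navigate` performs the redirect;
// a test passes fakes for both and asserts on what happened.
async function submitForm(formData, { api, navigate }) {
  const response = await api(formData);
  if (!response.ok) {
    return { error: 'Something went wrong, please try again.' };
  }
  navigate('/thank-you');
  return { error: null };
}
```

A test can then assert that a successful submission redirects and a failed one surfaces an error message, without touching a real backend.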
When things don't go to plan
It's easy to talk about the routine tasks and processes of daily software development, but an aspect you rarely foresee is what happens when things don't go as expected. Debugging issues, whether in previously written code or during the development of a new feature, can be incredibly frustrating, but it is part and parcel of the life of a professional software engineer. Overcoming issues can sometimes feel like more art than science, and intuition often plays a role in the process.
I try to be as systematic as possible when it comes to overcoming problems. One downside of modern-day software development is the incredible number of tools and libraries we use that other people have written. Don't get me wrong, Open Source Software is miraculous and our lives would be worse without it, but dealing with others' code can make debugging much harder. Rails is a perfect example of this. There is a lot that Rails takes care of, which makes building things faster and easier. However, trying to understand it when things are breaking can be frustrating, as many parts of the request/response cycle are hidden away to simplify the life of the developer using it. This means you can sometimes spend long periods of time digging into source code to understand why things aren't going as you imagine they should. That can be an enlightening experience, and I recommend reading source code as a way of improving your skills, but it tends to happen at exactly the times when you just want to get on with what you're trying to do.
That said, it's important to try to find the cause of any issue you're facing. It could be that you don't fully understand a tool you're using, it could be a bug in the tool itself, or a combination of the two. If in doubt, assume the mistake is yours, because it most likely is. Check the versions of the tools you're using: if one of them is brand new (days old), consider rolling back to a more stable version that has had more users and more feedback. Read stack traces: if you're getting an actual error, read the stack trace until you find the part of your code that is failing. This will either give you a place to debug your own code, or an entry point into the library that will be illuminating. Google thoroughly: it might seem like a novice approach, but I don't know a single professional software developer who doesn't Google on a daily basis. Googling error codes, error messages, the top parts of stack traces and combinations of the tools you're using will typically throw up some relevant results. Referring to documentation is another first port of call when I'm using a new library and something isn't going right. I often skim documentation the first time around, so I can easily miss important details that make the difference between the code working and not.
Administrative tasks
When working on a project, there is always a need to track and publish the progress you're making on a task or feature. Tools like JIRA, Linear, Trello and Asana are used extensively to keep track of progress in software development teams. It's another thing I can be bad at, so I usually set myself reminders to update the tasks in these tools before the end of the day, so that others, particularly team leads or product managers, can see the current status of a task for you and the rest of the team.