I decided to read through all the essays Paul Graham has written. The only catch is the format: the essays are web-based. I figured it would take me a while, and I prefer to read in ePub format since it is neatly laid out on my phone and remembers where I left off. Fortunately, I found a GitHub project where you can get Paul's up-to-date essays in different formats and even find the code for a DIY solution.
Mac mini server 2011 – the last upgrade
Running a home server is kind of a hobby with some benefits. I’ve been doing it for almost two decades, starting out with a Windows machine put together from old parts. Then I upgraded, upgraded some more, and at some point, I ended up with a Mac mini G4 (ah, good times) and finally a Mac mini server 2011, which I purchased around 2014-2015.
I ended up with Apple equipment because it was a good compromise between money, my needs, and my skills. At the time, I was pretty fed up with Windows and wanted to use Linux but fell short on skills. Besides, back in the day, Apple was quite serious about server hardware and server OS – they shipped separate CDs with the server OS – yep, CDs! Unfortunately, that didn't last: nearly a decade later, Apple started losing interest, and after another decade it no longer had a server OS or any interest in servers.
I've been running the Mac mini on macOS 10.13 "High Sierra" for the past six years, past all the releases that no longer support "old" hardware, and, I guess, quite insecurely. Luckily, I don't expose my server to the outside world. One of the biggest reliefs was Docker, which allowed me to expand High Sierra's capabilities and prolong its service. Anyone who has ever used OS X knows the features and services it ships with are really easy to use – smooth sailing. But the moment you want something that doesn't come with the OS – for example, the built-in Apache with a PHP module – get ready for some pain and uphill battles. Fortunately, Docker sidesteps all of that.
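To illustrate the kind of sidestep I mean, here is a minimal sketch of a compose file that replaces the built-in Apache/PHP battle with a single container – assuming the official php image with Apache bundled; the tag and paths are placeholders, not what I actually ran:

```yaml
# Minimal sketch: PHP + Apache in one container, instead of fighting
# the OS-bundled Apache. Image tag and paths are placeholders.
services:
  web:
    image: php:8.2-apache      # official PHP image with Apache bundled
    restart: always
    ports:
      - "8080:80"              # expose Apache on host port 8080
    volumes:
      - ./site:/var/www/html   # serve PHP files from ./site
```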
Unfortunately, this year the good times came to an end – Docker received a breaking update, and the old version could no longer find or download new images. Considering Apple releases a new OS every single year, it doesn't make much sense for Docker to keep supporting the five-year-old High Sierra. So the time for a tough decision came.
Should I buy more Apple hardware or simply move to Linux and see how far I can get with Ubuntu LTS (Long-Term Support)? I wasn't keen on buying a new Mac mini – an upfront cost of $1200 is something to consider carefully. The used option is tricky: the Mac mini 2018 is already out of macOS support and, at the same time, can't be upgraded, since the storage is soldered on. I could buy a used pre-2018 Mac mini – also out of support – and hope that Docker keeps working for a while, but gambling is not my strong suit. Besides, the long-term goal is to move to Linux and different hardware (perhaps a Raspberry Pi) – so user-friendly Ubuntu it is.
Before installing Ubuntu, I had one last gift for my already old Mac mini server – SSDs all around (main and secondary drives). The main drive in the mini was so old that its paper sticker disintegrated into dust in my hand – an impressive 12 years of service, though it had started giving me trouble recently. The Ubuntu installation was straightforward, and the OS runs fast – blazing fast. I guess a four-core i7 and 16 GB of RAM is still a pretty decent setup.
Overall, the migration went OK. I hit plenty of trouble with the data transfer, but that was my own mistake – I didn't prepare, and in my deep ignorance I assumed that Linux and OS X file systems would know how to talk to each other properly. Naturally, I then hit issues with permissions and some other small stuff. Once file permissions were straightened out, the only big hiccup was the Samba service, which – as I learned six hours later – does NOT advertise its presence on the local network the way OS X does. Silly, but yeah. Everything else went more or less fine, thanks to a friend of mine who knows his way around Linux. I completed the entire migration from start to end in three days. Not a bad result, considering I spent nearly a day on the data transfer and another day fighting Samba for nothing – well, you live, you learn.
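For what it's worth, the usual fix I've seen for the discovery problem is to let Avahi advertise Samba over mDNS, the same mechanism OS X uses – a sketch, assuming the avahi-daemon package is installed; the file follows Avahi's documented service format:

```xml
<?xml version="1.0" standalone='no'?>
<!-- /etc/avahi/services/smb.service: advertise Samba via mDNS/Bonjour
     so the server shows up on the network the way OS X shares do. -->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_smb._tcp</type>
    <port>445</port>
  </service>
</service-group>
```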
I'm very excited about Ubuntu; after nearly two decades, I'm finally on Linux for my home server. I can definitely say that Ubuntu has come a long way – I don't recall it being this well-refined out of the box before. I'm sure I'll have to learn some more about Linux, drop down to the command line, and edit configs with Nano, but hey, in some cases it is easier than OS X. For example, crontab is so easy that I had to ask a friend a couple of times to make sure I didn't need to do anything else (OS X requires more work to achieve the same). Backups on Ubuntu are pretty good as well; I was especially impressed by Timeshift. It needs a little configuration out of the box, but it looks a lot more powerful than Time Machine – you make a snapshot, mess with the OS as much as you want, and then roll everything back, including OS updates and configuration – wonderful.
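To show just how little a crontab asks of you, here is a hypothetical entry added with crontab -e – the script path and schedule are made up for illustration:

```
# Run a (hypothetical) backup script every night at 02:30 and
# append its output to a log. That's the whole job definition.
30 2 * * * /home/me/bin/backup.sh >> /home/me/logs/backup.log 2>&1
```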
Anyway, the last upgrade to my Mac mini server 2011 is complete, and now I'm wondering how far it will make it. Will it last another couple of years, or all the way to its 20th birthday and perhaps beyond? Time will tell.
Simply Self-Hosted Bitwarden for Local Use
I find password managers extremely convenient, especially when they can be easily synced. However, after last year's security breach at LastPass, I decided to reevaluate my use case and my strategy going forward. Changing over 150 passwords gave me plenty of time to do so.
Requirements:
- Password access and sync
- Browser-based plugin
- Local network use only
Optionally:
- Remote access
I'm not going to discuss Bitwarden or cryptography in depth: firstly, there are plenty of reviews of different password managers available, and secondly, I don't have much knowledge of cryptography. So let me share my rationale. Cloud-based solutions are very convenient, and I'm sure every password manager out there is doing its best to protect your data. Unfortunately, security is not an easy matter and, let's face it, everyone makes mistakes. LastPass made a few mistakes, and now I don't know when my metadata and/or passwords will surface. So the only question I had to ask myself was: "Do I actually need to take that chance again?" My answer was "no," and here's how I achieved it.
- Host Bitwarden on your local network – choose a machine and give it a static IP (e.g., 192.168.0.2).
- Use Docker and Bitwarden's unified deployment method; note that the unified deployment is still in beta.
It took me some time, but I managed to create the simplest docker-compose file that actually works:
```yaml
version: '3'
services:
  bitwarden:
    depends_on:
      - db
    image: bitwarden/self-host:beta
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./bitwarden:/etc/bitwarden
    environment:
      BW_DOMAIN: "bitwarden"
      BW_DB_PROVIDER: "mariadb"
      BW_DB_SERVER: "db"
      BW_DB_DATABASE: "bitwarden_vault"
      BW_DB_USERNAME: "bitwarden"
      BW_DB_PASSWORD: "db_password"
      BW_INSTALLATION_ID: "get it from bitwarden.com/host/"
      BW_INSTALLATION_KEY: "get it from bitwarden.com/host/"
  db:
    environment:
      MARIADB_USER: "bitwarden"
      MARIADB_PASSWORD: "db_password"
      MARIADB_DATABASE: "bitwarden_vault"
      MARIADB_RANDOM_ROOT_PASSWORD: "true"
    image: mariadb:10.6.11
    restart: always
    volumes:
      - ./data:/var/lib/mysql
```
Here are some important limitations to consider:
- Email confirmation will not occur since I don’t have an email server and don’t see the need to set one up
- You will have to use the Bitwarden Web app to import data, as it can’t be done via the browser plugin – more
- Bitwarden Web only seems to work from localhost; otherwise you'll get an error saying this.subtle is null – most likely because browsers expose the WebCrypto API (crypto.subtle) only in secure contexts, i.e., HTTPS or localhost
Once you have set everything up, the rest is smooth sailing. The Bitwarden browser plugin doesn’t seem to care about the IP address, and it works great. If I need remote access, instead of exposing Bitwarden through a reverse proxy, I would prefer to use a VPN so I can log into my home network and access Bitwarden that way (I think it is safer this way).
I hope this idea is useful, and that Bitwarden will fix some of these limitations in the future.
Cheers.
Teaching programming to a kid
I have been slowly teaching my kid a bit of programming. Programming is not easy, and teaching it to a child is quite a challenge, so anything that makes it easier is welcome.
Initially, I used Scratch to teach programming; however, I moved away from it because it is not that easy to use once you want to make something a bit more complex (even I had trouble following some online tutorials) or to teach programming concepts such as for-loops.
Next, I tried Swift Playgrounds. It is awesome; however, I got stuck explaining for-loops. It might be easy for grown-ups to grasp the syntax and the associated concepts, but for a child, it is a challenge.
I have been thinking about what to do next. Python? Maybe it is a good direction, but again, the syntax will get in the way of learning programming concepts… Today, I discovered Hedy, and it looks very promising.
Check out the GOTO 2022 talk:
5 topics for yearly knowledge refresh
Recently, one more senior developer decided to leave my team and the company. The event made me a bit sad: not only is the team losing a good developer, but it also means a new developer will join the team, and that means teaching them all the ropes.
I have been through this a few times now, and it's really starting to get to me. It takes time for a developer to learn how to write clean code, test-drive, and refactor, not to mention learn all the ins and outs of the company's systems.
In addition, I keep noticing that a developer can take expensive courses – let's say on TDD – and still lag behind, missing tests or writing too many. That got me thinking: is it possible to improve the situation with yearly refresher courses? New developers would get to know all the essentials of development at the company, and current developers would brush up on existing practices and perhaps come up with improvements (or trash something that no longer brings value).
So here are 5 topics:
- Clean code
It is important to learn and practice writing clean code that is easy to read and follow. Clean code is foundational knowledge; it affects all other practices in a fundamental way, from production code to test code.
- Unit testing and TDD
Testing is the prerequisite for continuous delivery, and every developer must understand the value of testing and how it enables continuous delivery. Tests are code: they require maintenance and must bring value, and TDD is a well-established technique for writing valuable tests with the maximum reasonable coverage.
- Refactoring
No one ever designs or writes perfect code, and no one has a crystal ball that predicts future business needs. Refactoring is an important skill for continuously changing, adapting, and improving code and system design.
- Higher order testing
Beyond a system's boundary, there are more systems. Developers must understand the techniques and tools available for integration, contract, and end-to-end testing. Pros and cons must be weighed carefully in order to provide meaningful automated testing and short lead times.
- Pipeline and environment
Software systems are no longer built locally and run on bare metal: a pipeline builds the system, and the system runs in a virtual environment. While developers are not DevOps (and probably never will be), it is important for them to know how pipelines are developed, deployed, and maintained, and how systems are packaged and run in Docker under Kubernetes.
Premature optimization, value and waste
Ah, premature optimization – every developer hits it sooner or later. You optimize code, iron it out so there are no extra cycles, no extra memory, or whatnot, and not terribly long after, you have to take that carefully tailored code apart just because a new requirement came in. Worse yet is the realization that the optimization was useless in the grand scheme of the app: yes, the app works more efficiently, but it doesn't affect anything at all. Some might be proud of the craftsmanship, others disappointed with the waste; in any case, I'm not here to judge.
I am here to share a story about premature optimization at a much higher level – the feature level. A few years back, when I started writing my app, I wanted it to have a feature, let's call it the "frequency determinator": feed data to the app, and it determines how often an event occurs. In my mind, it was very cool to feed the app data and have it automagically detect a pattern. Well, I started with the determinator code; it was simple but working. I said, "Great, time to apply it to the real data that I have." But there was a problem: I didn't have an app, just the frequency code, and I wanted to upload data from my phone and see the magic. OK, no problem, I thought, I just need a UI. The first version of the UI looked very basic and at times was incredibly confusing even to me. So I rewrote it and added a few features, just so the app could be a bit more useful to me. I thought: "Well, now that I have a UI, wouldn't it be nice to have this and that; I'll hook up the frequency determinator in the next little while." I showed the app to a friend, but the UI was still confusing. OK, no problem – I rewrote it again and added more features. Since I already had some data in the app, it was becoming more and more useful. Along the way I did some more refactoring, added a few more features, changed the UI a couple more times, and the app kept shaping up. I was happy: the app alleviated a big chunk of my anxiety, and with each new feature it became more valuable and refined.
Is it there yet? Nope. I still need to rewrite the UI, since there is a lot of room to improve usability. I could definitely use a couple more features, plus some that I believe could reduce more of my anxiety. So when is the "frequency determinator" coming? I don't know! As I used the app, refactored, and added features, I gradually realized that the time for the "frequency determinator" hasn't come yet. There is no need for it, and judging by the feedback I got, there may never be. Something that looked like a great feature – the centerpiece of the jewel – turned out otherwise. Once you use the app, you realize where the value is for you. Once others use it, you see the value move around. Value doesn't exist in the vacuum of your own ideas; it exists only when someone points to it.
So, what now – stop wasting time and start looking for value? I wish I could say that, but in reality my idea of the "frequency determinator" kick-started the initial development. I might never have started writing the app without the belief that the "frequency determinator" would be the key to solving my problems. So is there value in waste? I believe there is: we "waste time," but because of it, we come up with ideas. I think wasting time looking at the sky or lying around on the couch is actually a good thing. But nothing will come of waste alone – entertain ideas long enough, build something, use it, and see if the value is there for you.
Spring Boot: test custom client, controllers and/or filters – a quick way
Recently, I posted a question on Stack Overflow (please check it out first). Unfortunately, I didn't have room there to explain the "why."
Also, I don't think Stack Overflow allows lengthy debate in the comment section, so I would like to give a quick explanation here and hopefully have a debate in the comments.
OK, so why would you want to test a service client and its associated controller (a Spring Boot service) in the same unit test? I believe the case is fairly narrow, and the following conditions should apply:
- The microservice must come with an associated client capable of exercising all available endpoints (in my case: an internal microservice policy)
- The client is complex:
  - Serialization and deserialization of objects
  - Uniform handling of (internal) errors
  - Custom security (internal use)
  - Custom compression
  - Logging
- The controller is thin (just delegation to a business layer)
- A limited/inflexible build pipeline OR a time constraint on unit test execution (say, if a test takes more than 4-5 seconds)
Now here is a list of "usual suspects" for why NOT to test the client and controller together:
- The service and client are separate "units" – SRP and/or separation of concerns
- The client is simple and can be tested separately
- The controller can be tested separately
- A fast/flexible build pipeline and/or no time constraints
I would like to defend my approach:
- "Unit test" is a very flexible term – I believe the developer/business can define what a unit actually means. In my case, internal policy states that if I develop a new endpoint and/or service, I must provide a client that complies with the company's internal needs. So the unit of work here is the client and its associated endpoint(s) – one can't exist without the other, therefore one unit.
- The client is not simple at all. Luckily, most of the internal logic is abstracted away, so I can reuse the abstraction and focus on the immediate things – path params, path variables, method, and payload – which should be tested.
- My controller is a thin one – there isn't much code there, typically one line delegating to the business layer. So I could test the controller separately, but there isn't much value in that. The value of a thin controller is in correct delegation and the entry point (paths and params specified correctly).
- The build pipeline is an important tool; however, if it is slow, constrained, and inflexible, it becomes a major source of headaches (and sometimes creativity). If your test brings up the service and in the process takes up a port and 10-18 seconds to start, the test will be ignored or removed in the name of performance – no value in that. A sketch of what such a test can look like follows below.
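To make the idea concrete, here is one possible shape of such a test – a minimal sketch with hypothetical names (WidgetController, WidgetClient, InMemoryWidgetService), assuming the client is built on RestTemplate. Spring's MockMvcClientHttpRequestFactory routes the client's HTTP calls straight into MockMvc, so no port is opened and no application context is started:

```java
import org.junit.jupiter.api.Test;
import org.springframework.test.web.client.MockMvcClientHttpRequestFactory;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.client.RestTemplate;
import static org.assertj.core.api.Assertions.assertThat;

class WidgetClientTest {

    @Test
    void clientAndControllerAgreeOnPathsParamsAndPayload() {
        // Hypothetical thin controller, with the business layer faked in memory.
        MockMvc mockMvc = MockMvcBuilders
                .standaloneSetup(new WidgetController(new InMemoryWidgetService()))
                .build();

        // The real client code, but its HTTP layer is short-circuited into
        // MockMvc: serialization, params, and error handling are exercised
        // without starting a server.
        RestTemplate restTemplate =
                new RestTemplate(new MockMvcClientHttpRequestFactory(mockMvc));
        WidgetClient client = new WidgetClient(restTemplate);

        // One unit: the client call and the controller endpoint, tested together.
        assertThat(client.getWidget("42").name()).isEqualTo("widget-42");
    }
}
```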
I hope this reasoning (along with the Stack Overflow solution) will be useful and helpful to those in need. Please share your thoughts.
Thank you!
Angular RxJS unsubscribe as a feature design
After a few hours of thinking and more hours of implementation, I finally proved my idea workable. I found it a bit intriguing, so let me share it. Before jumping to the matter at hand, I would like to briefly mention that:
- I don't believe in end-to-end testing
- I prefer unit testing
- I like to test-drive my code
A few more details can be found here. On top of the above, I dislike using mocks, spies, stubs, and other fakes to make testing "easier." Don't get me wrong, I do use mocks a bit, but mostly at a boundary (calls to the backend, the DB, and such). Let's be reasonable: if the code is a mess and/or legacy, or there is a time constraint, options tend to dwindle, and if you have to mock, you mock.
Now let's look at my situation: I have a simple app with a navigation bar located at the very top of the screen and at the top of the component hierarchy. The navigation bar contains a search component. Below it is the main space, with a router that happily swaps different main components in and out, depending on where you click.
There is no simple way to pass data from the search component to the currently displayed main component, unless we use an observable – in particular, a Subject. So we have a search observable, and the main component subscribes to it. The question is: how do we test that the main component unsubscribes when we navigate away and it is destroyed?
After some thinking, I realized that NOT every main component is searchable, and in those cases the search component can be hidden. The approach has an elegance to it: it helps with testing and at the same time improves the user experience, by eliminating a confusing search box that doesn't seem to do anything while a non-searchable main component is displayed.
On the other hand, when the main component is searchable, the search component must be displayed so the user can search it.
OK, so how do we kill two birds with one stone? We enhance the search observable so that it counts how many subscribers it has. Whenever a main component subscribes to or unsubscribes from the search observable, the count goes up or down. Then a bit of elbow grease to wire up the search component to hide itself when the count is equal to or less than 1 (and show itself otherwise), and we are done.
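A minimal sketch of what that could look like, with assumed names (SearchService, search$, searchable$) and RxJS 7 imports: the count lives in a BehaviorSubject, and defer/finalize do the bookkeeping on subscribe and unsubscribe:

```typescript
import { BehaviorSubject, Observable, Subject, defer, finalize, map } from 'rxjs';

// Hypothetical search service shared by the navigation bar and main components.
export class SearchService {
  private readonly terms = new Subject<string>();
  private readonly count = new BehaviorSubject<number>(0);

  // True while at least one main component is listening; the search box can
  // bind its visibility to this (adjust the threshold if the search component
  // itself also holds a subscription).
  readonly searchable$: Observable<boolean> = this.count.pipe(map(n => n > 0));

  // Each subscription bumps the count; finalize decrements it when the
  // subscriber unsubscribes (e.g. when the main component is destroyed).
  readonly search$: Observable<string> = defer(() => {
    this.count.next(this.count.value + 1);
    return this.terms.pipe(finalize(() => this.count.next(this.count.value - 1)));
  });

  search(term: string): void {
    this.terms.next(term);
  }
}
```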
The whole logic can be tested with a Jasmine spec, by navigating between searchable and non-searchable main components and checking whether the search component is displayed or hidden. No mocks, no spies – just an elegant, user-friendly feature and code design.
War is Peace, Freedom is Slavery, Ignorance is Strength, Scrum is Agile
Smart code
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Brian Kernighan