I haven't been coding lately. I've been going through a whole bunch of meetings and job interviews.
I also have a (self-imposed) writing deadline. If I don't put some writing on the web every two weeks, I have to pay almost $50. The money doesn't go to charity or anywhere good, so skipping is pointless.
I could try to write about technical stuff, but it'd be even more shallow than usual. It's best to write about what's actually going on. So, to commemorate the end of three weeks of hell, I'm going to share some of the stuff that's been working for me in job interviews.
The aim of any job interview, for mine, is to get inside the door, talk to people, and figure out how the company actually functions. All the stuff that will never be in the marketing spiel. Most of the useful information will be subtextual.
To that end, here are a few lines of questioning I find useful.
These questions are for diagnostics. They are not gotchas for technical purity testing. The idea is to try to triangulate the company's current technical and organisational circumstances, putting the salary and sales pitch into a little more perspective.
Please bear in mind I'm a mid-level software engineer interviewing at small-to-medium software companies, many of which are startups. My questions betray my own limited experience and biases, and are not intended to be universally useful. Feel free to use them as a starting point for your own document, if you like.
If this document helps you, I'd love to hear about it! However, please don't inundate me with your own question sets or dispute any of my assumptions. I'm extremely tired of this stuff. I'm writing it down because it's now Done, and I don't want to think about it for another two years.
I've seen this quoted as the worst question you can ask. I'm not sure that's always true.
You wouldn't ask what Google does, but for smaller players, it can be essential. Public-facing websites are often inscrutable or misleading. A single sentence or a spoken paragraph can provide clarity in these cases. Often very interesting technical details will emerge from this question.
Make clear that you did your research, and it still wasn't clear.
Here we try to figure out how much technical debt and bureaucracy is involved, how people figure out what to build, and how they work together to build things.
Try to piece together the daily workflow. Do people submit small frequent changes to HEAD, or do they work on hugely divergent branches? Something else?
How regularly are changes submitted? How does this relate to the average size of a PR? Who does this process optimise for? What are the effects on velocity and error rate?
Do people opt in, or are suitable reviewers chosen somehow? How many people must sign off on a change?
There is a class of inane commentary that has no place in a code review. How do you figure out what is welcome and what is necessary? Is this encoded somewhere or cultural?
Unconstructive comments like "this looks wrong" can stop developers in their tracks. Does this happen? How do you stop it happening? How do you ensure your culture is respectful?
What is the testing burden to demonstrate correctness? How about clarity, documentation, etc?
(This often leads to an anecdote about a huge patch that sailed through with an LGTM.)
See if they wince.
To what extent will I be clicking around in the AWS console?
What's the happy path for releases?
They might subtly reveal whether this happens often. If it doesn't, what would happen?
How do I release in a hurry when it's truly warranted?
Usually orgs end up running some sort of Cloud Cron and some sort of automatic persistent service management. Does this exist yet? Should it? Does it work?
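For concreteness: the humblest form of Cloud Cron is still a crontab entry, and everything else (Kubernetes CronJobs, managed scheduler services, and so on) is a more elaborate version of the same idea. A hypothetical example, with the job name and paths invented for illustration:

```shell
# Hypothetical crontab entry: run a nightly cleanup job at 02:00,
# appending stdout and stderr to a log file.
0 2 * * * /opt/example/bin/cleanup --older-than 30d >> /var/log/cleanup.log 2>&1
```

The interesting interview follow-up is who owns entries like this, where they live, and how anyone finds out when one silently stops running.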
Are there guidelines or templates for common tasks, or is everything ad hoc? Is it easy to flit between other people's projects? Does it matter?
There's more than one way to do it, but some ways are established organisational conventions. I like to see which battles have been fought thus far, and how the organisational knowledge was encoded.
Do all engineers have a say in those guidelines, and a way to contribute to them on an ongoing basis?
Who decides? How are the different cases made? Can you recall such an occasion?
How many people weigh in before we commit to a design? How do you ensure the right people are involved before work starts?
How do you ensure people are actually listening to one another, and not talking past each other? What's stopping a headstrong asshole running off with a bad design? How have bad designs been handled?
The onsite water cooler / hallway track can have unstoppable inertia; how do you ensure everyone agrees? Remote-first companies tend to have better answers here.
What kind of pressure are you under right now, and how do you feel about it? Is this normal?
How stable is the code, how good are the alerting thresholds, etc? Does this turn into laborious unpaid overtime?
Try to figure out if the company is financially stable and managed fairly. Ask follow-up questions if things sound unusual. This can reveal what is actually valued. Compare against their claimed values. Compare with management's response to the same question.
The Joel Test is still needed, sadly. Nobody has quiet conditions. How do they cope?
It's illegal in some states to prevent this, but it's still good to ask.
I always want to be free to maintain my personal blogs and Twitter feed.
If there is scope for official technical blogging on the job, even better. If there's time allocated for that kind of thing, net positive.
Why / why not? The real response is probably in their body language, excuses and qualifications. Sometimes people are candid.
Here we try to figure out what is important to the company, how priorities are set and controlled, how scope is determined, and what happens when things go wrong.
What structure exists to determine what is worked on? Is there room to push back? Does scope get cut when necessary?
Some believe these should be scheduled like clockwork; others don't. Ask why/why not. This can be revealing.
I'm usually looking for a fairly regular feedback cycle with clear expectations and guidelines. This seems to vary a lot.
Beyond the job description, what does it mean to actually be good at this job? This can reveal a lot, such as whether they are looking for leaders or toilers.
Does it concentrate in old hands and on the back of toilet stall doors? Is there a culture of writing, talking, sharing? How long does it take to get up to speed and involved in the company?
Are they fair? Are they bastards? Are they predictable?
Compare the response to that of the engineers.
Do you have a good reason to maintain core hours?
If it exists, is it clear? What are the social expectations around WFH? Alternatively, is it butts-in-seats?
Look for things like conference policy, learning opportunities, speaking opportunities, internal talks, promotion tracks.
Are all of the above permitted for unrelated technical topics, within reason? Are you trying to keep employees in place, or to help them get where they want to be?
Here we try to figure out if we'll still be employed and working on the same stuff in a year's time. Ideally you'll be able to identify hucksters and psychopaths at this stage, though I never have.
Usually they'll have a roadmap for the company prepared, and will be happy to practise the boardroom sales pitch.
What would need to change for that roadmap to dramatically change?
Do some research beforehand. Compare and contrast. Are they sensible? Delusional? Would you invest your own money? Implications for stability and priority flux.
Will this person stand up to unreasonable customer demands, or let them dictate scope/timeline? In my eyes, a bad manager lets customer demands bear down directly on the engineering team.
Sometimes this is not worth asking. You can often tell from a stroll around the office and the answer is usually not good.
Too tired to formalise these:
Don't read too much into this stuff; it's just a bunch of suggestions and starting points. Hope it helps.