Inventive modernization of an air compressor
Entertaining battle and interesting dissection of component failure
Testing sunglasses with UV light meter
How a residential pump, otherwise well made, is engineered to fail within a timeframe (18:45)
Tearing down an expensive Dyson vacuum cleaner and the numerous ways it can fail over time
“it hurts even if there’s juice on it” too cute.
The next results were even more impressive. The participants were asked to listen to some stories and answer questions an hour later. Without the chance to rest, they could recall just 7% of the facts in the story; with the rest, this jumped to 79% – an astronomical 11-fold increase in the information they retained. The researchers also found a similar, though less pronounced, benefit for healthy participants in each case, boosting recall between 10 and 30%.
Fascinating read. Introducing quiet time after learning dramatically improves retention. It’s a simple thing to do, but doing nothing for minutes on end is potentially difficult. The article recommends 10-15 minutes, but I suspect shorter intervals in the 3-7 minute range would be effective as well. If you’re trying to digest a lot of new information, more frequent and shorter rests may be more realistic. Since what works best is bound to vary from person to person, it’s best to experiment with a stopwatch and see how long you really need.
The KPTI patches to mitigate Meltdown can incur massive overhead, anything from 1% to over 800%. Where you are on that spectrum depends on your syscall and page fault rates, due to the extra CPU cycle overheads, and your memory working set size, due to TLB flushing on syscalls and context switches.
Ouch.
My virtual machines aged 5 years overnight; the performance hit has been worse than the highest estimates of 30%. But I never thought the fallout could introduce penalties as high as 800%. Crazy.
Google has said that it has tested the productivity of remote teams and on-site teams and found no difference in performance.
This deserves to be highlighted, as many companies hesitate to embrace remote work out of fear it will hurt performance. Even if it’s a 1:1 trade for Google, it makes a huge impact on the quality of life its workers experience.
Google doesn’t want its engineers in local coffee shops hooking up with others from other companies and plotting a Google-killer. The more it can keep its people company men and company women – the better.
The better for stockholders of Google.
But there is one indicator you can use that tells you close to everything you need to know about a candidate, an indicator that most shops overlook when evaluating applicants…
All the code written is available to see - what the developer did or did not do. The mistakes they made, the fixes they applied; it’s all out there to be seen by all.

Whether reviewing and accepting PRs or submitting them to other projects, you can gain insight into how a developer gives or responds to feedback, what they’re like to disagree or brainstorm with, and how they keep the ball rolling on work that might stall.

It means they are intimately involved in a problem space, working closely enough with the tooling or technology to see where it can be better. Or if it’s an entirely new project, it shows their ability to identify a problem and properly engineer and package a solution.

What features did they implement and which ones did they skip? Are GitHub issues being managed? Are PRs being reviewed? Is there any kind of documentation? Is the code easy enough to reason about? These are all questions you can get insight into when looking over a developer’s open source work.

They are giving back in some way: having benefited from the work of other open source projects, they now contribute their own effort in return. It also means playing a part in a larger dialog about technology, features, and tooling. It means they’re able and willing to share their efforts with others and take the time to make their work available to the community. A small gesture, but one that involves a significant amount of work.
If you’re looking to hire developers, take a closer look at what their open source contributions reveal about who they are and how they work.
Silver Bullet
Something that provides an immediate and extremely effective solution to a given problem or difficulty, especially one that is normally very complex or hard to resolve.
Delivering applications to users has always been tough, especially if you try to keep them up to date with changes and improvements. Advances in the web allow companies to deliver the latest version of their software directly to users, enabling the creation of countless e-stores, games, and desktop publishing software. But even still, it is not without its complexities. There are ever-evolving issues of complexity, maintainability, and security to account for, in addition to feature development and staying nimble.
Before Ruby on Rails, there was ColdFusion, custom PHP incarnations, VB.NET, Java, and all other manner of crazy. I remember one of the first codebases I saw: a custom PHP app a fellow high school student had created for our AV club. I still remember the hardcoded SQL calls, complete with hardcoded credentials, embedded throughout the pages. Oh, how far we have come!
Sure, these solutions all render out a web page - one of the great levelers of the web - but working on them was often nightmarish. There were no tests and no staging environments; all development was done on the server, and backups were a file copy. Naturally, these codebases decayed into neglect while remaining in use; we’ve all seen the horrendously outdated sites we sometimes need to use when interacting with the government.
As an industry we’ve identified a lot of best practices that help keep a codebase maintainable: patterns that appear over and over again, and procedural best practices.

Users have also come to expect native applications for their platforms, which can mean feature development takes three times as long when you maintain a custom codebase for each platform.

And lastly, productivity and effectiveness matter. Squashing bugs and creating new features needs to keep happening when your business is software or runs on it.
Rails makes developing web applications faster and more maintainable. It places emphasis on developer happiness and on tools and solutions that enable effective feature development. Once you have the environment set up, going from nothing to a web application you can actually start using can be done with a couple of command line invocations if your needs are CRUD-based (most are!). Additional libraries (gems) let you further expand your app’s behavior, and equally you can roll your own solutions when called for.
It’s also not afraid to break things for the sake of improvement. Backwards compatibility matters for some frameworks, but on the evolving web it’s best to stay with what’s current, secure, and performant as browsers and technology evolve. This attitude is also what makes Rails secure: doing the “secure” thing is the default behavior, and updates regularly include bugfixes or advisories to be aware of. Rails also ships with request forgery protection, SQL sanitization, file upload handling, and more right out of the box. An important caveat here is that updates often require (or at least encourage) changing your codebase at each update - but that’s the trade-off you make to keep up with best practices.
But perhaps most importantly for the point of this post, Rails lets a small team ship web and native experiences to all major desktop and mobile platforms, reusing most of the code that’s already been written. By wrapping webviews in native navigation and using caching (also provided by Rails), you can create a seamless user experience - free of weird interface delays or screen flashes - while still doing most of your feature development in the Rails stack and instantly delivering updates to users on all platforms. Rails lets you optimize your app for fast response times - as fast as 20ms, the fastest I’ve seen in production.
Rails is also quite flexible, featuring a set of very well-thought-out opinions while always leaving you the option to do things differently. And if you need to start building out an API, doing realtime websockets, delivering more JavaScript-rich experiences, or scaling up to support a lot of users, Rails can happily oblige.
And those well-thought-out opinions, those sane defaults that Rails ships with? They make for some highly maintainable codebases. Everything has a place, everyone (generally) knows and agrees where those places are, and breaking the rules is easy except where it’s ill-advised (like strong parameter protections). Even code that you wrote years ago can still make sense - no small feat!
That Ruby on Rails lets teams of all sizes deliver tested, responsive applications so quickly across all major web, mobile, and desktop platforms makes it the silver bullet framework of development if there ever was one. The productivity, effectiveness, and maintainability it enables remain unmatched among other development solutions in software.
It’s not every day that a single, common word can change who you are. At least I imagine it isn’t - I admit there might be people out there who see a word like “insulator” and become sheep herders - but I don’t think this is such a case.
What does it mean, to listen? My mouth not moving? That’s a simple enough definition, and the one I had for a while. But when you dive deeper, it’s more than that.
To listen is to be quiet - verbally, and in mind. You can’t listen when you’re being strategic in your head about how to prove the other person wrong.
To listen is to be intently curious and unbiased in what you see - no matter how much you agree or disagree. The desire to understand outweighs the desire to judge. When you have a question, it’s to know more, not to hint or imply.
And the best listeners are not tea cups you fill up with information - but mirrors that reflect what they observe - offering room for dispute on the facts, allowing them to better understand the other party.
Growing up, I never understood a particular word that gets thrown around quite a lot in daily life. Certainly a function of my upbringing and experiences - I always felt the word “love” was hollow, an empty threat of compassion. Many of my least favorite people had loved me - and expressed it in very creative ways.
But I think love makes a lot more sense when paired with cherish. If love is the theory, cherish is the action. Of course you love your wife, your kids, your close friends; you would do anything for them, and maybe you already do. But love without cherishing is a campfire with no warmth.
Ironically, a great way to cherish is also to listen.
I have a lot of goals, interests, ideas, todos, projects. I think a lot of us do. I’ve naturally accumulated them over the years in varying forms. Many I made progress on, and many more I did not.
But commitments. What do I have of those? I have a lease, which is a form of commitment, and a few other contracts and agreements.
But personally? What are mine? What have they ever been? It’s not a word that’s been in my vernacular as a descriptor for a given interest, project, or idea. I’ve had many interests, but no commitments.
What does it mean to be committed? It means accepting a fundamental change to who you are - changing your DNA as a person, flamboyantly. You have a mission and everything about you is geared towards it. The answer to every distraction becomes “I have a commitment”.
People who are interested join the army reserves. People who are committed join the army.
Put the code (below) in ApplicationController and set a before_filter like before_filter :install_strong_params, only: [:create, :update].
This code, when temporarily used within your project, will rewrite your controllers to use strong_params. It is not intelligent, and you may find you need to tweak the code or its output to work for your codebase, but it should carry you most of the way.
If you need to debug, uncomment the line that adds a params comment to the bottom of the controller.
Once your app has been upgraded, you should remove the code.
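The snippet itself didn’t survive in this copy of the post, so here is a rough, hypothetical stand-in (not the original code) illustrating the core idea: take a controller’s source, swap direct params access for a strong-parameters method, and append that method’s definition. The add_strong_params name, its arguments, and the rewriting rules are my own sketch.

```ruby
# Hypothetical sketch only - the original snippet was lost. Rewrites a
# controller's source so mass assignment goes through a strong-parameters
# method, appending the method definition before the closing `end`.
def add_strong_params(source, model, attributes)
  permit_list = attributes.map { |attr| ":#{attr}" }.join(", ")

  method_def = <<~RUBY
    private

    def #{model}_params
      params.require(:#{model}).permit(#{permit_list})
    end
  RUBY

  # Indent the new method to sit one level inside the class body.
  indented = method_def.lines.map { |l| l.strip.empty? ? l : "  #{l}" }.join

  source
    .gsub("params[:#{model}]", "#{model}_params")  # e.g. params[:user] -> user_params
    .sub(/\nend\s*\z/, "\n\n#{indented}end\n")     # insert before the class's final `end`
end
```

In spirit, running this over a controller that calls User.create(params[:user]) leaves behind a user_params method for you to prune down to the attributes you actually want to permit.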
Software happiness is the ability for an app to reach its full potential and maintain that state for years, ideally forever. It is its ability to remain agile in the short-term, as well as the long. And it is its ability to do this through the change of various hands of professionals over time.
Businesses often build or sell software that becomes critical to their survival. But far too often it becomes neglected and unmaintainable, sometimes to the point of requiring a rewrite or, in extreme cases, closing the business.
This is by far the most important thing happy software needs, and lack of it is the leading cause of app health issues. Software that is not groomed and looked after regularly becomes a black box or obese (usually both). Software needs someone to sit down with it and refine its inner workings regularly if it is to remain nimble and agile throughout its years.
Software does not need unbounded amounts of love, but it does need a lot, and chances are good that your software could use more of it. These investments are harder to see the fruits of, and they are not sexy. But without them, an app gone bad can take out an entire company.
As an app grows, it will usually acquire different needs and interests. Without guidance, these can cause serious damage over time and potentially make the app too complicated to use. It is important, when designing a particular role or feature, to remain humble and curious. Far too often, people try to play the “my way” card. And while this card can and has worked, it’s not the right card for most people.

By far, it is better to listen to what others say and make informed decisions from that. Although this isn’t exclusively true either; really, the issue isn’t playing the “my way” card, it’s knowing when to. Sometimes you will know better. Other times you won’t. It’s important to know the difference, and change tactics to suit.
Not adding features to software can be as deadly as adding too many. While it’s certainly possible that your app needs nothing else and has reached a state of “doneness”, chances are good there are optimizations and tweaks that would improve its utility. When tastefully done, the resulting improvements can place you at the top.

However, not all changes are welcome. By far, the least popular kind of change to an app is to its design. While sometimes this does improve the app, far too often design changes only serve to confuse or anger users. Which leads us to the next topic…
It’s important to listen to what people say: the people who use your software, the ones who build it, and the market overall. They will often have valid suggestions or complaints that are in the app’s best interest to discuss. While how you react to their comments is the result of several factors, that you give them a forum is the most important part.

Actively not listening has killed, or at least effectively killed, countless software projects.
Your well-adjusted software needs a purpose. Without one, it will meander off in different directions and likely not land anywhere useful. If you don’t know what your software does, needs to do, or could do, it can’t get anywhere. Sometimes wandering aimlessly leads to wonderful things, but usually it leads to death. If your software does wander about, it should do so with intent, measuring and tuning performance along the way. And once you’ve established what it needs to do today, you should think about tomorrow.
Security is a hard problem, and a hard sell. You could leave your front door unlocked tonight, and it probably won’t matter. If you left it unlocked tomorrow night too, it still probably wouldn’t matter. But one night, you’ll leave it unlocked and it will matter. Or you’ll leave it locked, and it won’t matter because someone broke in through your window.
The moral of the story: if you do everything you can, you can stop the simple and easy security breaches. But there are many windows in a house, and you can’t protect them all. If you’re not taking a Fort Knox approach to security, you should at least know what’s at stake and how to react if something happens; though with software there is the added need of detecting that something happened in the first place, as there is rarely broken glass to be found.
Keeping your software in shape means exercising all of its parts regularly, both together and in isolation. This ensures your software performs as expected and is free of bugs. And while you can manually sit and exercise your app, it’s usually preferable to write software that tests software, especially under heavy, active development. Writing these tests is the responsibility of the developers; making sure it takes place is the responsibility of the software’s parents.
But sometimes exercises are redundant or pointless, and bugs are allowed to creep in. Other times the test software is so poorly written that it becomes unreliable or unwieldy. Or maybe there are no tests written at all. Thankfully, all of these can be addressed using the love and listening principles outlined previously.
It is tempting for creatives to push their boundaries. Artists want bigger paintings, writers bigger stories, actors bigger movies. Developers and designers are no different. Developers want to use new languages, databases, or techniques; designers want to create impressive and definitive designs with new frameworks or syntax. And while these can and do lead to some impressive things, they far too often become a disability and a hindrance to the app. Tried and true solutions and approaches are boring, and there is always the temptation of something new. But boring is what makes happy software, so one must strike a balance between the two.

Generally, it’s useful to think of such things as genie wishes: your software only gets three in total. If you’re going to use a new framework or language, there should be a strong reason for it, one that current tried and true best practices are unable to address. There are many fads in software, and it’s important to sort them out from the pivotal changes that move through industries.
Happy software has the best chance of reaching maturity and stability when developed by professionals. A professional is more than someone who can do the work; it is someone who can do the work well and is striving to do better. While the amount of money paid does not correlate directly with professionalism, it’s certainly a useful signal. You don’t get happy software by being cheap; you get it by investing time and money into its growth.
The professionals who build your software should be improving its maintainability and lowering the bar for less experienced professionals to take over. The people who construct buildings are not the same people who sweep the floors afterwards, and the same holds true with software. As your software grows and matures, its needs will become more detailed and nuanced. Difficult problems will be solved, paving the way for easier problems to be solved. It’s important that these transitions be able to take place.

Communicate your desires and establish how things will look from the next few months out to the next several years or longer, both on the current course and in an ideal world. Also identify their involvement and how best to position for any transitions.
Sometimes inaction is the best action. If things are manageable and the risks are known and understood, sometimes doing nothing is the right call. Software happiness is not a short-term solution but a long-term one. However, software rarely exists in a bubble, and the slightest change to any one of hundreds or thousands of dependencies can cause a chain reaction of failure that results in loss of revenue, customers, or data.
Creating a better future for your software is no easy task, especially if you are not involved in its development. Professionals make mistakes or underperform, and it’s hard to tell the state of things if you’re not in the industry. So what’s the solution? Hire an advocate. Having an unbiased party involved ensures that all priorities are being met appropriately and that the app is heading in the right direction for its short- and long-term goals.
I’ll work with you and your professionals to improve the well-being of your app and make sure it gets the attention it needs to prosper in both the short term and the long.
Hourly rate: $225. Weekly rate: $7k.
email r.hire@ruru.name for more information.
Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?
The answer is yes, but why? Why does switching have a higher rate of success?
Given a choice of 3 doors, it’s fair to say your odds of being right are 1/3. When the host reveals a door he knows to be a dud, this does not change your original choice’s success rate - you still have a 1 in 3 chance. But there is now a 2/3 chance that the other door is the one with the prize. In essence, you’re betting that your original guess was wrong (it only has a 1/3 chance of being right) and that the other door - all other doors eliminated, and yours unlikely - is the better choice.
This is much more apparent with something like 100 doors: your original pick had a 1 in 100 chance of being correct, so the prize is far more likely to be behind the one door remaining out of the other 99.
Thankfully, this isn’t that hard a problem to recreate in code. Feel free to paste this into a Ruby console and experiment!
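The original snippet was lost from this copy of the post; here is a minimal stand-in simulation that demonstrates the same result (the method name and structure are mine):

```ruby
# Simulate the Monty Hall game many times and return the win rate.
def monty_hall(trials, switch:)
  wins = 0
  trials.times do
    doors = [:goat, :goat, :goat]
    doors[rand(3)] = :car
    pick = rand(3)

    # The host opens a door that is neither your pick nor the car.
    revealed = (0..2).find { |d| d != pick && doors[d] != :car }

    # Switching means taking the one door that is neither your pick
    # nor the revealed goat.
    pick = (0..2).find { |d| d != pick && d != revealed } if switch

    wins += 1 if doors[pick] == :car
  end
  wins.to_f / trials
end

puts "Stay:   #{monty_hall(100_000, switch: false).round(3)}"  # ≈ 0.333
puts "Switch: #{monty_hall(100_000, switch: true).round(3)}"   # ≈ 0.667
```

Staying converges on 1/3 and switching on 2/3, matching the reasoning above.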
Get it here: rubygems.org and github.com
I wanted to cover the important takeaways and how I got there; this post is geared more toward that than toward every detail of the process.
I recently switched my development environment over to a Linux virtual machine - something I intended to blog about a few months down the line, after the honeymoon phase had passed. Everything has been pretty decent so far, except for a slowdown in my test runtime. The test suites for the Ruby on Rails apps I work on aren’t fast on a good day, and they were a little more sluggish in my VM. So I wondered if there was a way to speed up my test time locally. I tried a few solutions, but none of them made much of a difference.
I thought about what people have classically done to speed up test suites. Some have gone so far as to decouple their code from Rails and test in isolation - but that’s controversial and nontrivial. I tried parallelism and got a marginal benefit, but not nearly what I was expecting. Eventually, I just wanted to copy my virtual machine and switch between a few of them to execute my tests. I quickly dispelled that idea, but it led me to my first a-ha…
The tests we write need to be executed to ensure our code works. Some Ruby on Rails projects can run their entire test suite in a few seconds, but many of the Rails projects I work on do complicated browser testing that just takes time to run through; sometimes a really long time.
CGI animators are no strangers to waiting on computers. They work on scenes that would take a normal computer hours or days to render a single frame of. Their solution is to break the work up across many machines, each working on a little piece. It’s basically the same thing here, only instead of rendering CGI animations, we’re rendering test results.
In practice, this works well. But not everyone can afford to keep a server farm ready just to run tests! I know I can’t. There needs to be a way to start things up on demand…
With Digital Ocean, I can take a snapshot of a machine and turn it into an image I can later use to create new machines. A new machine gets a new IP address and hostname, but otherwise everything is the same.
And with their API, all I need to do is create a few machines from a predefined image, and I have an on-demand server farm. The costs are pretty good - about $0.175/hr for 25 machines; worth it if it can shave minutes off my testing time, and I’ll be doing a lot of testing.
It’s worth pointing out that machines take a few minutes to spool up - especially if you create a bunch at once.
So I created a few machines and split up all the files in the test folder to run across them. I noticed a good improvement in testing time - enough to confirm my suspicion that it would work.
Some machines got stuck with all the slow tests, so I shuffled the file ordering before assigning files to machines; each machine got a few different kinds of files to work on, which should even out the overall run time.
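As a sketch of that idea (not the gem’s actual code; the names are illustrative), shuffling the spec files with a fixed seed and dealing them out round-robin looks like this:

```ruby
# Shuffle spec files so slow ones don't cluster together, then deal them
# out round-robin across the available machines. Illustrative sketch only.
def assign_specs(files, machine_count, seed: 1234)
  buckets = Array.new(machine_count) { [] }
  files.shuffle(random: Random.new(seed)).each_with_index do |file, i|
    buckets[i % machine_count] << file
  end
  buckets
end
```

Each bucket then becomes the file list handed to one machine’s rspec invocation.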
But the tests still weren’t quite as fast as I’d like. Some files have a lot of slow tests in them, and whichever machine got stuck with such a file inevitably dragged out the test run.

This led to…

By issuing specific line numbers instead of whole files, slow test files could be torn open and their slow innards spread across several eager machines, resulting in a good deal of improvement in the time it takes to test.
But the tests still felt slow. Even though I broke open these slow spec files, their sluggish guts now polluted the run time of the rest of the machines and prevented them from finishing faster. I needed a way to break them up while isolating them from the rest of the machines. Actually, I wanted to be able to break anything up: specs, files, and whole directories. This all led to…
With clusters, I could give every slow acceptance test its own server, or assign all the helper tests to one particular server. With some tuning of what got divided up and how, I was able to achieve a testing time that felt reasonable - the run took only as long as my slowest spec.
After this, it was pretty easy. I would run the rspec command with a JSON output formatter, then parse the results from each machine and display them as the overall test results. I went to bed pretty happy with what I had accomplished with clusters.
But there was a problem… I forgot to destroy the test machines I created. That ended up costing me about $2, but it was a lesson that led to…
While one could query the Digital Ocean API for machines whose created_at is too old and destroy the results, that means you’d need a machine out there doing it for you. You could use your development machine, but if you shut it down and go home for the day, those machines will keep running.

Instead, it would be good if each machine could clean up after itself, since it’s already out there running. But this needs to be flexible: if I have a solid day of testing, I don’t want machines destroyed while I’m working on them. Equally, I don’t want to set the limit too high and spend money on machines I’m not using.
So the metric I settled on was uptime. If I set a lifetime of 4 hours and I’m 3 hours in, I can do a quick reboot and have another 4 hours. I could also probably do some bash trickery, or more simply change the config value in the file. Any of those beats waiting for another batch of machines to be created!
With this in place, machines that have been alive for too long can self-destruct, and I can go to bed and sleep soundly.
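A sketch of what that self-destruct check might look like (assumed, not the gem’s actual code; the API call that actually destroys the droplet is omitted, and the uptime read is Linux-specific):

```ruby
# Read the machine's uptime from the kernel (Linux-only).
def uptime_seconds
  File.read("/proc/uptime").split.first.to_f
end

# A machine has outlived its usefulness once uptime exceeds the
# configured lifetime; a reboot resets the clock, as described above.
def expired?(uptime, lifetime_hours)
  uptime > lifetime_hours * 3600
end
```

A cron entry on the image could run this check periodically and call the provider’s destroy endpoint when expired? returns true.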
With that in place, I had the foundation figured out. The rest was squeezing it into a gem and settling on a command line parsing gem to use. I went with escort; so far it’s holding up.
Aside from more providers and overall polish, I think this gem could do more for Ruby on Rails testing. We often write code with no idea how it will perform in production. Technically, the exec command in the gem already makes it possible to run a command across all the machines - which could be a command to hammer your staging server with requests for a few seconds - but I think something more integrated could be achieved too.
Also, I feel like the machines could be driven harder, but I haven’t had much success in doing so. Ideally I’d cut the number of machines I need in half - but it’s a minor pain point.
I don’t know how useful this gem will be in the long run - I know I’ll use it when I have to test a lot, and maybe there are folks out there suffering through a slow test suite whom it will really help. I released it as a gem in hopes of contributing something useful back to the community.
If you take a look at the documentation you’ll find test doubles described as:
A test double is an object that stands in for another object in your system during a code example.
I don’t know about you, but that’s not very descriptive. So let’s take a look at an example using some pseudo-Rails code.
In the above example, we have defined a User, a Subscription, and an Account class. We want to test User, so at first glance you might instantiate your objects like this in a test:
This will work, but it’s a bit much. We are just testing User, and we’d like to test it in isolation. Looking at our example, we see that for a User to know if it can_login?, it must ask its Account, which goes on to ask the Subscription. We could stub the Account#enabled? method and remove the need for a subscription like so:
This is better, as we no longer need to create a subscription; but we’re still instantiating an Account class and everything that gets associated with it. Do we really need a real Account object, or can we use something more lightweight? This is where test doubles come in. They’re a bit like stunt doubles, standing in for the real actors when the real actors aren’t actually needed.
A further refactoring would look like this:
Now we only have one real object to worry about! Calls to user.can_login? will be delegated to the fake_account, which is set to return true for is_verified.
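Since the code blocks were lost from this copy of the post, here is a self-contained reconstruction of the whole progression in plain Ruby. The prose above mentions both enabled? and is_verified; this sketch standardizes on enabled?, and the RSpec double is shown in a comment since the sketch runs without the gem:

```ruby
# Reconstructed classes following the delegation chain described above:
# User#can_login? asks its Account, which asks its Subscription.
class Subscription
  def initialize(active)
    @active = active
  end

  def active?
    @active
  end
end

class Account
  def initialize(subscription)
    @subscription = subscription
  end

  def enabled?
    @subscription.active?
  end
end

class User
  def initialize(account)
    @account = account
  end

  def can_login?
    @account.enabled?
  end
end

# The full-stack setup the post starts with:
user = User.new(Account.new(Subscription.new(true)))
user.can_login?   # => true

# With a double, the Account/Subscription pair is replaced by a lightweight
# stand-in that answers enabled? directly. In RSpec 3 this would be:
#   fake_account = double("Account", enabled?: true)
# A plain object shows the same idea without the gem:
fake_account = Object.new
def fake_account.enabled?
  true
end

User.new(fake_account).can_login?   # => true
```

The double never touches Subscription at all, which is the whole point: User gets exercised in isolation.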
It’s important to note a few things:
If you’re concerned with whether the method you’re stubbing actually exists, use verifying doubles (new in RSpec 3) to ensure the method you stub exists on the real version of the object. You’ll get an exception if you try to stub something that doesn’t exist.
There are a few types of doubles, covered in the above documentation. This allows you to double not only instances but classes as well, along with a few other testing tricks you might need to resort to such as working with singletons.
Do not stub or double the thing you’re testing! Use doubles when you want to isolate something like a method and test its various conditions without having to manipulate the far-reaching objects and logic it relies on to arrive at the same conclusion. Assuming the things you stub or double have unit tests of their own, this should not be a problem.
You should still have some test coverage that exercises the actual operation of all the parts together. But often you can do this in an integration spec and verify that a lot of things are working properly with a few lines of code. Unit tests are a thousand little strokes of the testing paintbrush; integration tests are a hundred broad strokes.
Anyway, hopefully this helps.
Special thanks to Kevin Skoglund (@kskoglund) for helping me make heads or tails of doubles. He’s got a new lynda.com tutorial on RSpec 3 coming out, which you’ll be able to find here once released.
]]>You should always write tests. The only time it’s OK not to write tests is when it’s for public-facing, non-trivial code. Or if you don’t feel like it.
A good method is 2000 lines of code
A better method is 200 lines of code, all in one line.
Sprinkle the word “TODO” as a comment throughout the codebase, but leave it to others to figure out what it’s for.
Instead of deleting code you don’t need, comment it out. But make sure you put “TODO” in a comment above it.
In fact, be proactive about scary commented-out code. Leave User.destroy_all as a comment in the user signin process.
Give things names that are deeply related to the codebase, but have nothing to do with what’s actually going on.
A good codebase is like a puzzle; it should be difficult to assemble and be missing pieces
There is no such thing as too many relations. A user should has_many :last_names.
“Set it and forget it” is more than just the slogan for a late-night cooking infomercial; it’s a design philosophy.
Good conditionals should read like this:
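Something like this, say (a sketch of my own, names invented, written in the spirit of the advice above):

```ruby
# Six branches, one line, zero clarity -- exactly as recommended.
User = Struct.new(:admin, :active, :banned)

def access_level(u)
  u.nil? ? :none : (u.admin ? (u.active ? :all : :none) : (u.banned ? :none : (u.active ? :some : :none)))
end

access_level(nil)                          # => :none
access_level(User.new(true, true, false))  # => :all
```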
I should disclaim first and foremost: There are many ways to waste your time on a computer, and reddit is just one of them. For me, it was by far the biggest (second only to large database imports in dev).
I’ve been a redditor almost since it started, and it’s been interesting to see the site grow and change. But I’ve come to question its place in my life. From a community perspective, I’ve never quite drunk the reddit Kool-Aid. Any attempt to share legitimate content is often met with downvotes, disinterest, or my favorite: being told you can’t post here / wrong sub. And even the most reasoned comment can elicit a vile reply. I think that’s a clear failure of the platform as a whole.
From a consumption perspective: there are some very informative, creative, and interesting subreddits, and I can’t say it’s always been bad there. But in my browsing there might be one article I find useful, or one post that’s worthwhile (expands my reality in some meaningful way). Actually, I think the subreddit I gained the most from was r/TwoXChromosomes/, for a glimpse at what the world looks like for women. But otherwise, I’m just foraging for novelty, and I can’t say the time I spent on it was meaningful or provided much return.
And lastly, my faith in reddit: as a technical platform, they’ve had to scale, and I appreciate that challenge. But having been a moderator and a regular user, they have a long way to go, and have had for some time. Equally, they as an organization have had some disheartening news of late (and blatant CYA behavior, in the name of free speech, to add insult to injury), and I can’t help but see it as part of a systemic failure, not merely a simple mistake. And trying to engage in meaningful discourse with a manager at reddit over twitter was met with the same tone of response I’d expect from an actual reddit comment.
And I think that’s really when it dawned on me: The company is a reflection of the community, and the community is a reflection of the company.
Meanwhile, I’m a little over 1.5 years into learning the piano and music theory, guided by a piano teacher (who I found on reddit). So with all the above factored in, it seemed like a no-brainer: I’d rather go to bed having spent an hour on the piano than an hour on reddit. I also have measurable gains from piano practice, while the benefits from reddit often seem cloudy or absent.
So, I added reddit to my /etc/hosts and it now resolves to my side project. It’s been amusing seeing how many times I knee-jerk to that URL, only to go “oh yeah…”
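For anyone who wants to replicate this, it’s a single line in /etc/hosts. The 127.0.0.1 target below is an assumption; point it at wherever your replacement actually lives:

```
# /etc/hosts -- send reddit somewhere harmless (here, localhost)
127.0.0.1   reddit.com www.reddit.com
```

Changes take effect for new lookups; some browsers cache DNS, so a restart may be needed.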
To stay connected, I still browse hacker news and slashdot, but that takes only a fraction of the time reddit would normally represent. And if I really want to check in on a particular subreddit, I can do so on my phone; but that’s a more painful experience for me and not something I do regularly.
I encourage everyone to do the same: kill the time wasters that offer little or no return. Maybe for you it’s something like WoW or facebook. If you feel connected to the reddit community but still find it takes up too much time, there are browser plugins, like LeechBlock (Firefox), which can help control the amount of time you spend on a given site.
Also, pick up an instrument. $genders_of_interest dig musicians.
And fuck reddit.
]]>pjax_rails gem, and useful advice if you need additional content delivered besides the yield result.
I ran into this issue and my googling returned little information, so I PR’d additions to the pjax_rails gem README, and thought I’d make a post while I’m at it for anyone who runs into this in the future.
When you add pjax_rails to your project, it ensures that pjax requests skip your layouts and render only the relevant view.
But you may run into the need to include additional content, such as a contextual menu, outside of the particular views.
You can specify a particular layout to render for pjax requests by overriding the pjax_layout method in your controllers:
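A sketch of the override, assuming the method simply returns the layout name. In a real app this class is your ApplicationController inheriting from ActionController::Base; it’s a bare Ruby class here so the snippet runs standalone:

```ruby
# pjax_rails calls pjax_layout to choose the layout for pjax requests.
class ApplicationController
  def pjax_layout
    'pjax' # renders app/views/layouts/pjax.html.erb
  end
end

ApplicationController.new.pjax_layout  # => "pjax"
```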
You’ll also need to create the corresponding file, app/views/layouts/pjax.html.erb.
You’ll likely want application.html.erb and pjax.html.erb to render the same content, so I recommend moving that content (the data-pjax-container element and its children) into a partial, and then rendering it in both application.html.erb and pjax.html.erb.
As a tip, I add an h2 title to pjax.html.erb when I want to debug what’s pjax and what’s not.
A great tool for git users
To try this out on your own, put this in your terminal:
or create an alias for it in your ~/.bash_profile
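Whatever the one-liner is (see the linked post for the real thing), wrapping it in an alias in ~/.bash_profile looks like this; the git command below is only a placeholder:

```shell
# ~/.bash_profile -- alias a long one-liner (placeholder command shown)
alias gtrick='git log --oneline --graph --decorate'
```

Open a new shell, or source ~/.bash_profile, for the alias to take effect.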
See the post for this trick here. You can find other cool tricks on commandlinefu.
]]>Steve didn’t like the status bar and didn’t see the need for it. “Who looks at URLs when you hover your mouse over a link?” He thought it was just too geeky.
Important to remember that how you approach something is not the same way others approach it. Good software has an adjustable grip, whereas poor software assumes all users are the same.
]]>