What's your most controversial programming opinion?

167

363

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placement), but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

Derr answered 2/1, 2009 at 13:14 Comment(0)
872

Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing but program, just that they do more than program from 9 to 5.)

Stephen answered 2/1, 2009 at 13:14 Comment(0)
769

The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jump on too many bandwagons and try to force methods, patterns, frameworks, etc. onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions, etc. Just that people shouldn't blindly jump on something without asking WHY this "thing" is so great, whether it IS applicable to what they're doing, and WHAT benefits/drawbacks it brings.

Thyrse answered 2/1, 2009 at 13:14 Comment(0)
710

"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear people criticized for googling the answers to their problems, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)

Florina answered 2/1, 2009 at 13:14 Comment(0)
710

Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
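A tiny hypothetical illustration of the difference (the class and constant names here are invented for this sketch):

public class InvoiceExample {
    static final int LATE_PAYMENT_SURCHARGE = 1; // hypothetical business rule

    public static void main(String[] args) {
        int invoiceTotal = 100;

        // The duplicating kind of comment: it restates the line below it and
        // will silently rot the moment the code changes.
        // add one to invoiceTotal
        // invoiceTotal = invoiceTotal + 1;

        // The alternative: let a well-named constant carry the meaning instead.
        invoiceTotal += LATE_PAYMENT_SURCHARGE;

        System.out.println(invoiceTotal); // prints 101
    }
}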

Woolgrower answered 2/1, 2009 at 13:14 Comment(0)
693

XML is highly overrated

I think too many jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

My 5 cents

Fart answered 2/1, 2009 at 13:14 Comment(0)
678

Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)

Brubaker answered 2/1, 2009 at 13:14 Comment(0)
612

I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that a first programming language should highlight the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.

Elo answered 2/1, 2009 at 13:14 Comment(0)
539

If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.

Nitro answered 2/1, 2009 at 13:14 Comment(0)
535

Performance does matter.

Diamagnetism answered 2/1, 2009 at 13:14 Comment(0)
488

Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often this can be quicker than stepping through in a debugger, and you can compare the printed output against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
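A small hypothetical sketch of what that progression can look like in Java (the class, method, and values are made up; the logging calls use plain java.util.logging):

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    double applyDiscount(double total, double rate) {
        // Quick-and-dirty debugging: print intermediate values and compare runs.
        System.out.println("applyDiscount: total=" + total + " rate=" + rate);

        double discounted = total * (1.0 - rate);

        // Before production, delete the println or promote it to a logging call
        // that can be turned down or off via configuration.
        LOG.log(Level.FINE, "applyDiscount result={0}", discounted);
        return discounted;
    }

    public static void main(String[] args) {
        System.out.println(new OrderProcessor().applyDiscount(100.0, 0.10));
    }
}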

Gorton answered 2/1, 2009 at 13:14 Comment(0)
467

Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

Roo answered 2/1, 2009 at 13:14 Comment(0)
465

1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist only to integrate the work of the other APIs, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity == bad, it's that unnecessary complexity == bad. I've worked in massive enterprise installations where some of it was necessary, but even there, a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
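As a concrete example of the loop-invariant point, here is the kind of trivial reasoning meant above (a hypothetical sketch, in Java):

// Computes 0 + 1 + ... + (n - 1).
static int sumUpTo(int n) {
    int total = 0;
    // Invariant: at the start of each iteration, total == 0 + 1 + ... + (i - 1).
    // It holds trivially when i == 0, each iteration preserves it by adding i,
    // and when the loop exits with i == n the invariant gives exactly the sum we want.
    for (int i = 0; i < n; i++) {
        total += i;
    }
    return total;
}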

Swarm answered 2/1, 2009 at 13:14 Comment(0)
439

Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public - maybe a bit different if you're using threads (though that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them and then claiming that doing so is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say that the (automatic or not) generation of a getter/setter pair for every field effectively goes against the so-called encapsulation you are trying to achieve.

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
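A hypothetical illustration of the difference (the class and method names are invented): the first version is public data with extra ceremony, while the second exposes behaviour and keeps the representation free to change.

// Getter/setter pair for every field: effectively a public field with more typing.
class Account {
    private double balance;
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}

// Encapsulation: callers ask the object to do something meaningful instead of
// reaching in and manipulating its state directly.
class BetterAccount {
    private double balance;
    public void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }
    public double balance() { return balance; }
}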

Troublous answered 2/1, 2009 at 13:14 Comment(0)
383

UML diagrams are highly overrated

Of course there are useful diagrams, e.g. the class diagram for the Composite pattern, but many UML diagrams have absolutely no value.

Twelfth answered 2/1, 2009 at 13:14 Comment(0)
380

Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate seeing sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on one page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?

Gambit answered 2/1, 2009 at 13:14 Comment(0)
354

Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

Basset answered 2/1, 2009 at 13:14 Comment(0)
341

If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!
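For reference, here is a minimal sketch of one possible answer to the first question, written in Java here rather than the C# mentioned above. It relies on the fact that the error of this alternating series is bounded by the first omitted term:

// Estimates Pi using 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
// Stopping once the next term drops below 1e-6 bounds the error by 1e-6,
// which is enough for 5 correct decimal places.
static double estimatePi() {
    double sum = 0.0;
    double sign = 1.0;
    long denominator = 1;
    while (4.0 / denominator >= 1e-6) {
        sum += sign * 4.0 / denominator;
        sign = -sign;
        denominator += 2;
    }
    return sum; // approximately 3.14159
}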


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into that here (it's a whole new question), other than to say that it largely misses the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.

Souter answered 2/1, 2009 at 13:14 Comment(0)
331

The use of hungarian notation should be punished with death.

That should be controversial enough ;)

Burkes answered 2/1, 2009 at 13:14 Comment(0)
287

Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.

Oops answered 2/1, 2009 at 13:14 Comment(0)
274

Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.

Outspoken answered 2/1, 2009 at 13:14 Comment(0)
266

PHP sucks ;-)

The proof is in the pudding.

Sheng answered 2/1, 2009 at 13:14 Comment(0)
262

Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.

Sherrisherrie answered 2/1, 2009 at 13:14 Comment(0)
256

Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.
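A made-up sketch of what "a method wherever you can name one" looks like in practice:

// Instead of one long method that parses, filters, and sums inline,
// each step that can be given a name becomes its own small method.
static int totalOfPositives(String commaSeparatedNumbers) {
    int[] numbers = parse(commaSeparatedNumbers);
    return sumPositives(numbers);
}

static int[] parse(String commaSeparatedNumbers) {
    String[] parts = commaSeparatedNumbers.split(",");
    int[] numbers = new int[parts.length];
    for (int i = 0; i < parts.length; i++) {
        numbers[i] = Integer.parseInt(parts[i].trim());
    }
    return numbers;
}

static int sumPositives(int[] numbers) {
    int total = 0;
    for (int n : numbers) {
        if (n > 0) total += n;
    }
    return total;
}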

Botsford answered 2/1, 2009 at 13:14 Comment(0)
235

It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.

Glenglencoe answered 2/1, 2009 at 13:14 Comment(0)
196

Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.

Confab answered 2/1, 2009 at 13:14 Comment(0)
186

Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

Souter answered 2/1, 2009 at 13:14 Comment(0)
184

I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly that I don't have the source for, and that might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in the project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.

Thyrse answered 2/1, 2009 at 13:14 Comment(0)
180

Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
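A hypothetical illustration of why this matters even on a VM: both methods below compute the same sum, but the first walks each inner array sequentially, which matches how the JVM lays the data out in memory and is typically much faster on large matrices because of CPU caches.

// Cache-friendly: visits elements in the order they sit in memory.
static long sumRowMajor(int[][] matrix) {
    long total = 0;
    for (int row = 0; row < matrix.length; row++) {
        for (int col = 0; col < matrix[row].length; col++) {
            total += matrix[row][col];
        }
    }
    return total;
}

// Cache-hostile: jumps between rows on every step (assumes a rectangular matrix).
static long sumColumnMajor(int[][] matrix) {
    long total = 0;
    for (int col = 0; col < matrix[0].length; col++) {
        for (int row = 0; row < matrix.length; row++) {
            total += matrix[row][col];
        }
    }
    return total;
}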

Tzar answered 2/1, 2009 at 13:14 Comment(0)
163

Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.

Stephen answered 2/1, 2009 at 13:14 Comment(0)
152

There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Souter answered 2/1, 2009 at 13:14 Comment(0)
148

Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...

Labio answered 2/1, 2009 at 13:14 Comment(0)
115

A degree in computer science does not - and is not supposed to - teach you to be a programmer.

Programming is a trade; computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)

Salvage answered 2/1, 2009 at 13:14 Comment(12)
The first one is not really controversial, at least not in the CS field.Boar
I disagree. I know many people studying computer science that think they are getting a degree in programming. Every time I hear whining about why CS programs don't teach everyone Java I offer up a pained sigh.Salvage
Java doesn't really teach you how to be a real programmer, since there's so much you can't learn with it. It's like building a car with legos.Unnecessary
I may agree with the first point, but saying that knowing only Java could make a programmer ..... that's a crime, punishable with death!!!Warehouse
Can you move your second answer to another post so it can be rated separately.Wage
I agree with "does not", but not with "is not supposed to". Where else in academia are you supposed to learn to program? There is no analog in software to the Engineering disciplies (mechanical, electrical, civil etc.).Aquamarine
@MusiGenesis: My local community college has an "Associate in Applied Science Degree" in "Computer Programming". (Washtenaw Community College) That is where I would go to be a programmer. It is important not to confuse Computer Science with Computer Programmer. They are NOT the same thingSalvage
@MusiGenesis: I've actually just completed my degree in Engineering (Software). I'm certainly not a computer scientist, and I don't want to be.Kutch
A CS degree is indeed not a programming degree. But then again, a programming degree doesn't make you a good programmer either. Both can introduce you to the basics and some special subfields, but it's up to you to use that as one of many sources of information as you develop your skills. Now, you may be able to solve any problem your work poses to you using a single language, like Java. But is it the best way? Learning several different languages and paradigms can help expand your perception of how problems can be solved using program code, and allow you to create better solutions.Latex
I disagree that CS does not teach you to be a programmer. It DOES and SHOULD do that - incidentally by teaching multiple languages, not one only - but that's not ALL it should do. CS degrees should also teach you about as many different areas of CS as possible, eg basic programming, functional languages, databases, cryptography, AI, language engineering (ie compilers/parsing), architecture and math-leaning areas like computer graphics and various algorithms.Dexterdexterity
Programming is easier in some fields than in others. Web development and most of the work you do in Information Systems is not hard. If you have a bit of a knack for programming, you can do this stuff very well without a CS or engineering degree. If you want to be a game programmer, write device drivers, work with embedded systems, or other things of the like, you'll need to know certain things from the degree.Bekha
I disagree. CS degree teaches you how to solve problems often using C/C++ (low level languages), it teaches you algorithm design, the theory behind OS, general algorithms used everywhere - all of these apply if you want to code. In other words, you get the basics - a foundation upon you can build by learning more languages. Knowing Java doesn't make you a programmer, in fact, it's the most ridiculous thing I have heard for awhile.Hemelytron
101

SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.

Strade answered 2/1, 2009 at 13:14 Comment(13)
I found it: Single Entry Single Exit !!Riancho
I guess, that in other words it is "function should have only one return statement" - never agreed with that one.Constipate
Moreover, an exception is just another exit point. When functions are short and error-safe (-> finally, RAII), there is no need to follow SESE.Titian
Agreed. I cringe at the 100+ loc methods I've seen that carry a return value from the first line all the way to the bottom just to adhere to SESE. There is something to be said for exiting when you find the answer.Characharabanc
Totally agree on that one, I was about to add it onto this post, you beat me to it ;)Plumcot
Wait people actually do this? Why can't you just search for "return"?Tabshey
SESE is law in unmanaged code, but in managed code it isn't, some post somewhere here in SO explains it betterStockstill
I'd like to see that post, but admittedly, my opinion comes from a strict managed code domain.Strade
This might be useful when your debugger only has a maximum of two breakpoints. Very common in embedded hardware environments.Rowel
I think SESE is a great example of a solution in search of a problemJosiejosler
SESE dates back to 1960s and structured programming. it made a lot of sense then. single entry is pretty much guaranteed today, clinging to single exit just betrays low iq.Paction
It only makes sense if it's SESRP: Single Entry, Single Return Point. This was important in languages like BASIC where you could GOTO here, there, and everywhere. Better practice was to always return where you came from, using GOSUB instead of GOTO. With modern programming languages this isn't so much of an issue...which seems to be how the sensible "return where you came from" morphed into the awful "exit from only one point of the method".Geronto
I was running PMD on a project and came here to post this after an annoying set of 'OnlyOneReturn' point violations popped up.Anorthosite
100

C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand-coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place these events in their proper context.

-R

Messenger answered 2/1, 2009 at 13:14 Comment(24)
I would upvote if it wasn't for the "it's a horrible first language", I think it sucks but it's a good first language, particularly because it does suck, then one can appreciate the need for better languages!Warehouse
It's very difficult to create usable classes in C++, but once you create them, life is very easy. Way easier than using plain C. What I do is the following: I implement the functionality in C, then wrap it using C++ classes.Alva
The way I see it, a lot of misgivings about C++ stem from the fact that C++ is generally taught wrong. One typically needs to unlearn a lot of C before one can grok C++ well. Learning C++ after C never seems a good idea to me.Olathe
And I think that C++ is superior to C in every way, except that it unfortunately was designed to be “backwards” compatible to C.Literatim
I think C++ is a good example of "design by committee" done RIGHT. It's a mess in many ways, and for many purposes, it's a lousy languages. But if you bother to really learn it, there's a remarkably expressive and elegant language hidden within. It's just a shame that few people discover it.Ludovico
I've got another bone to pick with you: “You can teach C++ in two ways” – this is wrong. Apparently you have only ever used C++ in two ways, without unlocking its true potential. This also explains your microcontroller related experience: C is no faster than (well-written) C++.Literatim
+1: Of all the languages I've ever played with, C++ is the only one which has made me sick every time I've approached it. I've had a book on C++ for years, I pick it up every once in a while and tell myself "it really can't be that bad" and read until my eyes bleed, I've made it to page 47.Reveal
There is a third approach to learning C++: Accelerated C++ takes it. It builds from the very beginning (variables, functions) but using real C++ elements (STL). I recommend it for anyone who wants another view into C++.Dalury
@dribeas: I appreciate the recommendation, it looks like a good book. I doubt I'll ever be able to "appreciate" what C++ has to offer but if I ever recover from my previous experiences I will take you up on your recommendation.Reveal
Okay, if C++ code was ten times slower than C code, what sort of Mickey Mouse compilers were you using? Or what idiotic code conventions were you required to use? Were you asked to do exception specifications, for example (almost always a bad idea)?Rosebay
Just throwing this out there, but the Programming Language benchmark game has quite a few examples of C++ being faster then C.Liaison
"When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code." - who says you have to use classes, rtti and whatnot?Dalhousie
you don't have to use those features. if you only use the C subset, then C++ is equally fast as C. then, you can selectively pick those C++ features you like. some vector sugar here, some other stuff there. isn't that nice?Dalhousie
and i agree it's all but a nice first language. it's not wise to teach it first IMHO. and it's good that it's compatible to C. nuff said :)Dalhousie
I agree that it's got a whole raft of problems. but worst ever? Ever seen intercal? BFUNGE? assembly language?Diageotropism
Regarding your anecdote about C++ being an order of magnitude slower, keep in mind that C++ compilers of the '80s are not the same as C++ compilers of today.Wheaton
I agree that it's the worst language ever. Except for all the others.Nopar
I don't agree that its the worst language; I do agree that its a bad language; I also agree that its a bad first language. C++ is powerful and has a lot of features that are very useful. This makes C++ a good choice - sometimes. C++ also has a lot of hidden evil (lots of undefined behavior that looks perfectly fine..) which makes it a bad language and definitely a bad first language.Abbreviated
@david-basarab - C++ compilers are now much better! I use c++ not only for MIDI but for audio DSP algorithms - utilizing C++ templates makes it very powerful to make tunable compile time parameters such as buffer size and layout which allows for automatic SSE/altivec optimizations. The benefit of C++ now is not the language which is always a template-puzzle nowadays, but because the compilers available are better at optimizing real time functions than Haskell, Ada, Scheme and Scala areBernardabernardi
-1. C++ is still the most powerful multi-paradigm widely available language there is. It's the most adaptable of them all, therefore it can solve many different problems, which in some applications is very useful. It might not be best at each specific thing, but overall, it's seldom a really bad choice.Constant
C++ is like Democracy, "Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time." -Sir Winston ChurchillKahn
C++ is massive, and massively popular. Like all languages, it has applications for which is it well suited, and applications for which it is poorly suited.Bekha
+1 For second language. I learned Java first and a bit of C one year later. I'm glad I learned the low-level C stuff because it makes me a better high-level programmer, but I'm also glad I didn't have to start with C.Cudweed
What about Objective-C? And I totally agree with you, Huntrods.Nesmith
94

You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret

Geronto answered 2/1, 2009 at 13:14 Comment(25)
I know how to type (was an army teleprinterist) but I insist it makes no difference whatsoever.Pistoia
Nemanja->"no difference whatsoever"?! I just got 70wpm on an online test. I could see how someone could scrape by at 20-30wpm, but if they are using two fingers, plugging away at 5wpm (yes, I've worked with people like that), it's holding them back.Sawhorse
No difference whatsoever. I don't even know what is my current wpm level, because i completely lost interest in it. Surely, it is useful to type quickly when you are writing documentation or ansering e-mails, but for coding? Nah. Thinking takes time, typing is insignificant.Pistoia
Well, if your typing is so bad that you are thinking about typing, that's time you could have spent thinking about the problem you are working on. And if your typing speed is a bottleneck in recording ideas, you may have to throttle your thinking until your output buffer is flushed.Sawhorse
@Nemanja Trifunovic - I hear what you are saying but, respectfully, I think you are dead wrong. Being able to type makes a huge difference.Gerge
@keysersoze: I have never worked on a project when typing speed made any difference. Even when I write code from scratch and not fighting some crazy frameworks, a good editor makes typing skill almost worthless. With vim I usually just type a couple of letters before pressing Ctrl+P.Pistoia
@duncan: No hard feelings, but you are dead wrong - it makes no difference :)Pistoia
Even though I never learned to touch type my typing is very quick, and optimized towards writing code - not english. I always felt touch typists must be at a little bit of disadvantage, considering the heavy use of symbols in coding which touch typing is not optimized for.Vaasta
I know how to type. After twenty years of typing my index and middle fingers know where all the keys are, so I don't have to look down at keyboard all that often. But I had this argument in a different context long back: a colleague argued that camel case is [contd...]Semiweekly
[...contd] better then underscores because it is easier to type. My argument is that you are not supposed to write code at speed of typing.Semiweekly
I don't mind looking at the keyboard once in a while to relieve eye strain. You HAVE to change your focus at times. If you are a good typist, chances are you either have glasses or contacts.Gagarin
While I can't touch type and confirm this I do suspect that it helps. I have encountered many situations where slow typing speed gets in the way. Sadly learning is mind-numbingly dull. Yes, I know there are all kinds of fun games to help you, but it's still dull for me. Still trying though...Demanding
+1. I repeatedly see people make tons of mistake because they are watching their keyboard instead of watching the code on their screen. Most common are syntax and code-formatting issues, but also real bugs that aren't caught by the compiler.Sperrylite
You must be using some ridiculously verbose language like Java. Thinking is the bottleneck when programming, not typing.Tabshey
I agree here. Though thinking is important, watching the screen is key.Verified
I agree that thought is the limiting factor behind programming, but who codes from the hip so much that they design the software as they type it? While I'm coding/typing, I have largely already designed the software... as a result, my thinking easily keeps up with my 80wpm+ typing speed.Hushaby
I can't think faster then I type. I am hunt and peck, using six fingers and the thumbs. Problem is not that I wouldn't benefit from ten finger, but that trying to train it slows me down to much.Appall
The strange thing is that hunters and peckers are just a hair's breadth away from full blown ten finger typing. After using a keyboard for years you know exactly where the keys are - you just don't know where your hands are without looking. And that's only a little bit of technique. BTW: using a Kinesis contoured keyboard helps a LOT. And using an english keyboard instead of a localized one.Bibliography
@hstoerr: When I first took a typing course, in sixth grade, I cheated and looked at my fingers. I was the fastest one in the class, the star pupil. Only I didn't really know how to type. Luckily, in seventh grade, I took typing again and this time did it right. It's the only useful thing I learned in junior high. (Well, that and "Always carry your books in a backpack so they can't get knocked out of your hands and scattered down the hall.")Geronto
The way I look at it, if you don't know how to type, how much programming experience could you really have? So yeah, I think a good programmer is one who knows how to type.Twofaced
I disagree. I never took any typing lessons, but spending most of my life behind a computer has made me remember where all the keys are so I can quickly type without looking at the keyboard. Maybe my hands aren't placed in the optimal position as you would learn in a typing lesson, or I don't use a DVORAK keyboard, but my typing is fine. And I sure don't want to type faster than I can think.Impersonate
I generally type with 4 fingers or so and I've tested my typing speed - 90 wpm.Pergolesi
Since when does wpm matter when programming? Programming requires thought, not just mindless typing.Meany
Typing is mindless by definition. If you're not typing, but hunt-and-pecking, you're using up brain cells to type that you could otherwise be using to think about your program.Geronto
-1 for dead wrong: you don't need to type at all to be a programmer. Then, +2 for what it really means: you must know how to type to be a good programmer. When I interview people I'd pass immediately if they can't touch type.Ultimogeniture
89

Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating an ORM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.

Coquillage answered 2/1, 2009 at 13:14 Comment(12)
You are mistaken "lazy" for "clever". A clever programmer will actually have to work less, which may make him/her look "lazy".Premer
@Diego, tnx, changed it to make it more appropriate.Coquillage
@Diego: I disagree! The Term "lazy" as applied to programmers is something I've heard and used many times before. (I think I first read it in a article by Larry Wall) It is a badge of honor!Beet
lazyness is the fulcrum of all human advancements. if we were not lazy we would still be hunting boars with spears for supper.Antimagnetic
I like to say, "I'm not lazy; I'm efficient."Pozzuoli
I agree with what you're trying to say, but I disagree with your definition of lazy. A lazy programmer does not look ahead; they will copy-paste a block of code between 4 different functions if it's the easiest thing to do at the time.Dexterdexterity
lazy/clever programmer... Programmers have to be clever to be reasonable programmers, so that's a given. A lazy programmer picks the shortest/easiest path to the solution of a problem. And this is not about copy/pasting the same code snippet 400 times, but rather finding a way to avoid copying the same code 400 times. That way the code can be easily changed in once place! The lazy programmer likes to only change the code in once place ;) The lazy programmer also knows that the code is likely to be changed several times. And the lazy programmer just hate finding the 400 snippets twice.Unbreathed
Though I agree with your explanation Lazy it isn't really the best word to describe this. Lazy - Resistant to work or exertion; I know a lazy programmer that is too lazy to create a bat file to automate a simple task that I see him type out all the time. If he would just spend a little time to make a few bat files it would increase his productivity. It turns out he is a good developer however he could be even better.Kahn
For the most part, I agree. However in HTML coding this is not the case. Lazy HTML coders use tables for layouts, and lazy back end ciders cut and paste instead of using includes. Having just slogged through someone else's code, I am very much aware of this phenomena. shudderTiemannite
It's hard to tell whether programmers are the hardest-working lazy people on the planet, or the laziest hard-working people on the planet.Bekha
-1 . I'm VERY lazy + I never wrote tools to automate things because I never saw any value in them. Developing tools is a one time huge additional amount of work that no true lazy person will be able to commit to.Lomalomas
+1 for Seventh Element/Zuu. Lazy programmers = much code. Smart programmers = less + better code.Scutiform
89

A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science does not mean you are better than him. What you have he can pick up in an instant, which is not the case the other way around.

Having a qualification shows your commitment - the fact that you would go above and beyond experience to make yourself a better developer. Developers who are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.

Leilani answered 2/1, 2009 at 13:14 Comment(12)
"degree in Computer Science or other IT area DOES make you more well rounded" ... "realize that it all doesn't matter at the end, as long as you can work well together" <- sounds a tiny bit inconsistent and self-contradictory.Unguiculate
IT referring to the fact that the other guy has a degree. It's strange, once you have a qualification, you might stop comparing yourself to others.Leilani
Agree - qualifications are indicators of commitment. They can be more but if even if that's all they are then they have value. It is only those without pieces of paper who decry them. Those with them know the limits of their value but know their value too.Gerge
From past experience I'd generally rather work with someone that at least has an EE degree, than someone who came into the field after college.Vaasta
i would even say a good university degree. i met a programmer at my work who finished some small it-schoold i've never heard of and didn't know how many different numbers can be written on 8 bits!Benfield
A degree in ANY area (except maybe post-modern literary criticism) makes you a more well-rounded programmer, especially if it's in mathematics or science or engineering. Comp Sci and IT degrees tend to have incredibly narrow scope and focus.Aquamarine
In the spirit of healthy discussion I'll just say that I vehemently disagree (and I've got one). Past deliverables shows commitment, not that you lived somewhere for 4 years and read some books.Hushaby
I don't believe in degrees as measurements of value or skill, but studying at a university gives you the opportunity to learn the foundations of many different fields that can be useful to you in a work situation. I'm doubtful if being able to graduate is an acceptable proof that you've learned anything, but I know that you CAN learn a lot of useful skills, if you're ambitious enough.Latex
"What you have he can pick up in an instant" - Not necessarily. The ability to write good code is something that tends to come with experience, though some people pick it up quickly and some never seem to get there. The guy with the CS degree will certainly be able to pick up the languages and APIs you use in an instant, but there's no guarantee he'll ever be a good programmer. And he certainly won't become one overnight if he's not one now.Dullard
I learned far more from my college library than the classes them selfs.Kahn
Disagree - Self learning can be quite better than university learning. As for University, they make you think they way they want (as better marks for thinking their way). A self learner will think far better (for a given value of better) that a person teached to lern one way. I'm fascinated that you agree with me, btw: "You realize that it all doesn't matter at the end, as long as you can work well together."Reynold
As someone about to complete a degree in Information Technology (with a specialization in Applications Development, no less), let me assure you that it is a small step above useless for someone interested in software development. You're more than likely to learn UML and object-orientedness which is supposedly good, but beyond that you're on your own.Bekha
87

Don't use inheritance unless you can explain why you need it.
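To illustrate with a hypothetical sketch (the stack classes below are invented): composition is usually the safer default, because it avoids coupling your class to a base class's entire surface.

// Inheritance couples the stack to everything ArrayList exposes (add at an
// arbitrary index, clear, iterators, ...), whether or not a stack should allow it.
class InheritedStack<E> extends java.util.ArrayList<E> {
    public void push(E e) { add(e); }
    public E pop() { return remove(size() - 1); }
}

// Composition exposes only the operations a stack is supposed to have.
class ComposedStack<E> {
    private final java.util.ArrayList<E> items = new java.util.ArrayList<>();
    public void push(E e) { items.add(e); }
    public E pop() { return items.remove(items.size() - 1); }
    public boolean isEmpty() { return items.isEmpty(); }
}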

Thurmanthurmann answered 2/1, 2009 at 13:14 Comment(7)
Inheritance is the second strongest relationship in C++ and the strongest relationship in most other languages. It strongly couples your code with that of your ascendant. If you can just use it through interfaces go for it. Prefer composition over inheritance always.Dalury
Most uses of inheritance as a form of reuse, overriding whatever is needed to change. They generally don't know/care if they violate LSP, and can achieve what they need with composition.Thurmanthurmann
I tend to think that delegation is cleaner in most cases where people use inheritance (esp. lib development) because: - abstraction is better - coupling is looser - maintenance is easier Delegation defines a contract between the delegating and the delegate that is easier to enforce among versions.Carnation
He's not saying don't use inheritance at all, just don't use it if you can't explain why you need it. If you're wanting to code an OO application and think throwing a little inheritance in here and there is just gonna make it OO, then you're dumb and should be fired from the ability program.Maigre
Like many other programming constructs, the purpose of inheritance is to avoid duplicated code.Geronto
Or as Sutter and Alexandrescu said in C++ Coding Standards: Inherit an interface, not the implementation.Tragicomedy
You should expand that to: "Don't ever code anything that you can't explain." Everything you do in code should have a reason.Daystar
86

The world needs more GOTOs

GOTOs are religiously avoided, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.

Decoration answered 2/1, 2009 at 13:14 Comment(36)
I agree. Not necessarily that we need more gotos, but that sometimes programmers go to ridiculous lengths to avoid them: such as creating bizarre constructs like: do { ... break; ... } while (false); to simulate a goto while pretending not to use one.Castano
Especially when you're taught what GOTOs are for an entire semester and how to use them, then the next semester a new lecturer comes along chanting the death of the GOTO statement in a folly of unexplained and illogical rage.Wideopen
I agree as well, one of my old lectures would go mental if you ever thought about using them. But coding to avoid them may end up being worse than using them.Commonable
I've used GOTOs in switch statements to have logic jump all over the place, and had no problem with it (apart from the fact that I got FxCop to actually complain about the complexity of the method in question).Brubaker
I have seen only 1 example of a good usage for the last 5 years, so make it 99,999 percent.Basal
I've never had to use a goto for anything. Anytime when I actually thought goto might be a good idea, it was instead an indicator that things weren't flowing properly.Pressure
No no no no no. So much production code is so wildly obfuscated and unclear already. You would be giving more tools to the monkeys.Contraindicate
I don't think I can come up with a single good use of GoTo in a .NET application... can you give an example of a good use of it?Vesica
Goto is very useful in native code. It lets you move all of your error handling to the end of you function and helps ensure that all necessary cleanup happens(freeing memory/resources, etc). The pattern which I like to see is to have exactly two labels in each function: Error and Cleanup.Fromm
The explanation I've heard is that GOTOs make the stack non-deterministic. If you got to a line with a GOTO, there's no way of telling how you got there. Makes debugging much harder.Dulci
As the years have gone by the need for GOTOs goes down and down as languages add constructs that remove the need for some uses. I'm down to about 1 GOTO per year now but there are times it's the right answer.Bradwell
Nice to see that this did indeed generate a great bit of controversy!Decoration
I find goto's are not very readable. I despise them in SQL, so why would I use them anywhere else?Eggshaped
@Jeremy, Can you do goto in SQL? SQL is a declarative language. Which db vendor has SQL that knows a goto?Riancho
@tuinstoel, MSSQL has supported it since at least 6.5. I use it a lot to begin, commit/rollback transactions in stored procedures.Eggshaped
@Jeremy, Don't you mean T-SQL instead of SQL?Riancho
To my knowledge in assembly/machine language all branching are forms of goto. What does your high level language get compiled into? Nothing wrong with the occasional "low level style" shortcut if it is done properly.Mending
Continue = goto for loops; Break = goto for blocks; switch = goto madness; Goto is obviously not a problem if used with some sense then. If you are using an OO language and you use Goto for Error and Cleanup then you scare me. RAII and counterparts should be considered your friends.Wage
+1 for controversy :). Oh, I know what GOTO's are, I started with BASIC like many of you. We need more GOTO's like we need DOS 8.3 filenames, plain ASCII encoding, FAT 16 filesystems, and 5 1/4 inch floppies.Mouldy
Just found this: stackoverflow.com/questions/84556/…Mindymine
A good example of goto: stackoverflow.com/questions/416464/…Unknown
I used goto quite a bit in C programming - generally as a finally block. I have a file handle I need to close, memory I need to free etc, so at the point where I would return early, I just set a return code and goto the cleanup: label.Injection
Gotos are also commonly used to code up state machines. You can use an enumeration, a switch statement, and a loop to achieve the same effect. However, all that really does is mask the true structure of your control flow (and slow things down a bit).Suit
Goto can be OK. My rule of thumb. If a good programmer, who doesn't often use Goto, is prepared to defend it - then it's OK. And it probably is a once a year thing if that. Dmitri, sounds like FxCop is right and you're wrong.Candide
This thread considered harmful. Edsger Dijkstra is rolling in his grave. :)Cobby
Agreed. I am struggling to translate numerical code from Fortran into F# because it lacks an efficient goto construct.Carding
The problem with GOTO's are that they are like giving a little alcohol to a recovering alcoholic. Incredibly dangerous for programmers coming over from BASIC who are unstructured happy.Feverous
People who think gotos are evil have never programmed in C, or if they have, they did it poorly. Gotos are the best way to do error handling in plain C, and repeating Dijkstra's quote dogmatically only demonstrates ignorance. Please read this before complaining about gotos: eli.thegreenplace.net/2009/04/27/…Isauraisbel
To add on to catphive's point about using goto's in C, here's a discussion about gotos by the Linux kernel developers when one man jumps the gun on a goto and proceeds to recommend avoiding it at all costs: kerneltrap.org/node/553/2131Chapeau
Actually, the discussion of the use of goto in Linux made me change my mind if goto is indeed harmful in development. I've learned not just to trust what you've taught :).Pine
I needed gotos in C because it has no equivalent for Java's "continue loopname;"Month
I once got sent home from college for telling someone to use a GOTO :PTouchwood
Events are the modern GOTO statement. You arrive from anywhere, anytime, with extra baggage of data that GOTOs never had.Approachable
I've always learned not to use GOTOs because they create spaghetti code and are for the lazy (that if you do use them, something is wrong with your flow). However, JUMP statements, which are essentially GOTOs, are very useful in assembly.Impersonate
"They have a purpose and would greatly simplify production code in many places. That said, they aren't really necessary in 99% of the code you'll ever write." +2 if I could, sir, that could not have been written better.Pergolesi
Sorry but I'm very very glad to have not seen a GOTO statement since porting a QuickBasic program to C#. Give me a break statement anyday.Salleysalli
H
80

I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.

Horseshit answered 2/1, 2009 at 13:14 Comment(5)
100% right. If only the Python developers would finally acknowledge this and change their otherwise exceptional language accordingly. Thanks for posting this.Literatim
But there is already one statically-typed Python-like language. It's called C# ;-)Strident
C# is python-like? Maybe you meant Boo ;)Horseshit
If anyone says dynamic typing is more terse, just point them to Haskell =). I agree with all but your 3rd bullet point. Dynamic code often accepts parameters that can be one of two types. For example, Prototype functions accept either HTMLElements, or strings which you can use $() to look up to get HTMLElements. A good static typing system will allow you to do this =).Seacock
#2 is only true if you follow #1, which in my opinion is unnecessary. If it's clear what the code does, then it is correct. I have a code I use a lot that reads in data from a tab delimited file, and parses that into an array of floats. Why do I need a different variable for each step of the process? The data(as the variable is called) is still the data in each step.Wheaton
F
76

Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.

Felicity answered 2/1, 2009 at 13:14 Comment(7)
Is this controversial? I guess no! -1!Lidstone
the latter part is highly controversialToh
I use the excuse: " I am charging my time to Professional Development" on the grounds that I am learning something useful as a developer. Boss agrees.Allowed
I'm not getting paid to do anything now. Just like hasen j.Mclane
I agree, but in my defense I've hit a wall and need a breather before tackling the problem again.Bekha
My friend likes to use the excuse: "I'm Compiling"Alterant
HAHA! Tell that to reputation monsters!Cadmarr
F
72

Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely: "be consistent", sound as it is, is used as a crutch by many to avoid ever trying to see whether their default style can be improved on - and to claim, furthermore, that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.
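
To give a rough feel for the idea, here is a small made-up C# fragment (invented for this answer, not taken from any real codebase) laid out along those lines:

static class InvoiceMath
{
    // Each argument on its own line, names aligned so that related
    // declarations form small rectangular "islands" the eye can take
    // in at a glance.
    public static decimal Total( decimal netAmount,
                                 decimal taxRate,
                                 decimal shipping )
    {
        var tax   = netAmount * taxRate;
        var goods = netAmount + tax;
        var total = goods + shipping;

        return total;
    }
}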

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.

Fallible answered 2/1, 2009 at 13:14 Comment(10)
Generally when things are aligned in a columnar way it creates a maintenance burden for a developer. Ie aligning the data type and identifier in a method declaration... Line1(int id,) line 2(char id,) ... making sure the data type, variable name, and even commas all are in a column is a MESSDecided
it usually just takes a couple of extra keypresses, if that. I didn't go into too many specifics, but I usually only break it into two columns for alignment purposes (usually type - id). I have some other rules to ease the burden where parentheses are concerned. The biggest obstacle I have [cont...]Fallible
[...cont] is fighting against auto-formatting editors. In fact, unless it's easy to disable I usually give up in those circumstances and "go with the flow". But with especially verbose languages like C++ I still prefer it.Fallible
Interesting. I would like to see some examples. Do you have a blog?Cowpoke
Well, I have: levelofindirection.com (yes, it forwards to blogspot - the pun was intended), and also organic-programming.blogspot.com . However, you'll notice neither have been updated for quite a while - due in large part to vconqr.com ;-) [cont...]Fallible
[...cont] - and I don't mention the layout stuff on either. I'll consider myself prodded - again!Fallible
Code formatting matters so much, it doesn't matter at all. By that I mean that editors should always reformat code when you load it, and SCM systems should reformat to a canonical style on checkin. Then everyone sees the code the way that works best for them.Vaasta
@Kendall: Sounds nice. It's hard, though, because you have to be able to specify the exact formatting of every possible bit of code, including code that isn't legal in the language!Cowpoke
This is a pretty much standard opinion. Or, at least, it should be. If this is controversial, then there is a problem.Alva
1TBS and elastic tabs, or death. ps: @Kendall - but yes, sounds nice :)Lilialiliaceous
M
71

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?

Motionless answered 2/1, 2009 at 13:14 Comment(7)
What's your view on whether the type of the variable should be explicit or not? (Thinking of "var" in C#.)Derr
Good one. If you have to work with legacy Fortran code, you wouldn't believe the headaches caused by this issue.Offside
I actually wanted to write this same opinion, as well. IMHO, this is the major drawback of both Python and Ruby, for no good reason at all. Perl at least offers use strict.Literatim
Explicit declaration is good, to avoid typos. Assigning types to variables is frequently premature optimization.Rosebay
Yup. ONE bug hunt involving an l (between k and m) becoming a 1 (between 0 and 2) wasted a lifetime of declaring variables.Bradwell
Anything else is not a real language. Now THAT'S controversial.Gagarin
I remember learning Visual Basic 6 in high school. If OPTION EXPLICIT was not the first line in each source file, we would fail.Advertise
M
68

Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.
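
To illustrate (hypothetical types and names, not from any real project): the kind of divergence that bites you is a guard that only exists in debug builds, so the build you test and the build you ship don't behave the same way.

using System;

public class Customer
{
    public string FirstName;
    public string LastName;
}

public static class NameFormatter
{
    public static string FormatName(Customer customer)
    {
#if DEBUG
        // This guard exists only in debug builds, so every test run passes...
        if (customer == null)
            return "<unknown>";
#endif
        // ...while the release build - the one customers actually run -
        // throws a NullReferenceException for the same input.
        return customer.LastName + ", " + customer.FirstName;
    }
}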

Mindymine answered 2/1, 2009 at 13:14 Comment(15)
I released something week before last that I'd only tested in debug mode. Unfortunately, while it worked just fine in debug, with no complaints, it failed in release mode.Rosebay
The only thing I differ between Debug/Release builds is the default logging level. Anything else always comes back to bite you.Vallonia
ummm - what about asserts? Do you either not use them, or do you leave them in the release build?Diamagnetism
Again, I don't tend to use them. If you're asserting something in debug shouldn't you have it fail in release too? Use an exception if it's critical, or don't use an assert (or don't care if the assert doesn't make it to release).Mindymine
@Cameron MacFarland - a good point; code with assertions in Debug mode either ends up not handling the failure condition in Release mode, or with a second failure-handling path which only works in Release mode.Floatplane
It would be like writing two different applications. Your debug version would be nicely debugged, and your release version wouldn't. Tragic!Eggshaped
@Daniel Paull, if there is something fishy it is often better to stop the processing than having corrupt data.Riancho
Agreed: Exceptions > Asserts.Mouldy
Agree: there are some very nasty bugs in there that could be real detrimental to your rep!Premer
Hmmm. So, release code almost never gets tested, right? No offence Cameron, but remind me never to use any of your softwareCandide
@MarkJ: That's what I'm saying, you should be testing the code that goes out the door, and not have a difference between "Release" that is not tested, and "Debug" that is tested, but never released.Mindymine
Asserts & exceptions have different purposes. Exceptions are for user errors -- things that "shouldn't happen". Asserts are for pre-conditions -- things that "CANNOT happen". Asserts bring the app to a crashing halt saying "You've got a big problem -- fix this now!!!"Washstand
@James: Exceptions also bring the app crashing down. Also what happens when a user sees an assert error? Are they supposed to fix it?Mindymine
All development and testing should be done on the release build, but a debug build should exist to assist in debugging. (Hello #ifdef!)Poniard
You just need to switch. Our QA uses debugging builds during development but switches to release towards the end. There are certain levels of sanity checking that you would like to be performed as much as possible before shipping, but cannot afford to ship due to performance reasons.Tabshey
P
64

Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.

Portiere answered 2/1, 2009 at 13:14 Comment(5)
+1. This a matter of ownership, we tend to care better for things we own than the things we don't. Want proof? Take a look at your company vehicles.Alternately
It also comes with the onus that people reporting bugs can report in sufficient detail so that it can be reproduced and tested to be proven fixed. It sucks to be so maligned when you reproduce a defect according to description, fix it, and find that the tester still has issues you didn't.Wage
I think testing and developing are different skills, they should be done by those who are good at them. Isolating testers from developers and making it hard for testers to get their bugs fixed: no excuse.Merow
Sounds like bad developers to me. I'd file this under not all lazy developers are good developers.Kahn
+1 for controversy: I'm only going to test the things I think to test for, and if I design the particular method... I've already thought of everything that can go wrong (from my point of view). A good tester will see another point of view -> like your users.Hushaby
H
63

Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrarily sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb

Henson answered 2/1, 2009 at 13:14 Comment(17)
Google does pagination, Google is very popular.Riancho
Good point. I would argue that google is narrowing down what users need based on real criteria -- the criteria is "ten best results." I'm not saying that showing less than the full results is always bad, if you give the user what they want.Henson
maybe you should give a concrete example of a thing that's paginated but shouldn't be. for example, how would you "narrow down" answers to this question?Warehouse
@bmb: Where does this put this thread? @tuinstoel: I claim that nobody ever (i.e. about 0.1% of all page views, probably much more for image search) use more than the first page of results. Pagination done right.Literatim
@Konrad Rudolph, Once or twice each year I search on my own name, I use all the page results (I'm not famous). That is probably the only time I use all the pages.Riancho
Sometimes it's easier for the user to read if all the controls are visible at the same time (no scroll bars). But in any case, you have to ask: Should I use paging or scrollbars? Either way it's still a click to the user.Jasperjaspers
@Riancho google does a lot of things but is not cooking fish. That google is doing pagination has no consequence in its popularity. Pagination is an antiquated model from books time. It will disappear soon in favor of ajax like refreshes, used by Google Reader for example.Dryly
I really, really hate the default 10 results from Google. I turn it up to 100 on every browser I use. I'd probably turn it to 1000 if there were an option (and it still was speedy)Teleview
You'll have much more trouble coming up with those query-based requirements than just implementing a simple pagination system. Sure, if you can suggest an alternative, go right ahead and reduce the number of items to return but not every problem will be as amenable.Goodrum
In the end pagination isn't really interesting. What's more important is the question: do you count all the search results and show the exact count or do you just provide an estimation? Google shows only an estimation, showing only an approximation has great performance benefits. Ajax like refreshes don't change this.Riancho
"Who are you helping by giving back 20 at a time? The server? Is that more important than your user?" If only 1% of users actually need this feature, then the server and thus the other 99% of users.Cowslip
Ortzinator, I would agree with you if I thought the number was really 99%. But since my (controversial) contention is that pagination is "never" what the user wants, then I think helping the server helps no one. However, users who don't want all the results don't have to get them. Then everyone is happy.Henson
I came across this answer while paging through and searching every answer to this question to see if anyone had already posted about anonymous functions. Just sayin'Readable
So what about resultsets that have thousands or millions of results? What if it's only hundreds but each one shows a bunch of detail? Returning over 100K violates web best practices and such result sets could result in huge server loads.Thrifty
tsilb, then "allow the user to narrow down what they need based on real criteria". The point here is not that subsets are always bad, it's that pagination is not a method of subsetting that helps anyone. And huge server loads? Boo hoo. Did you build your app to make your server happy? Or your users?Henson
slashdot uses an approach where if you try to scroll below the last entry an extra set is added to the page. I love it!Eward
Thorbjørn Ravn Andersen, that helps a little, but it would still be tedious if you want to use your browser's "find" function.Henson
U
62

Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience when I mention to another developer that they shouldn't be doing everything in the page load method they often push back ... so for the children please quit building the "do everything" method we see all too often.
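
As a sketch of what I mean (the names are made up and this isn't tied to any particular framework), the "do everything" handler gets reduced to a coordinator, with each responsibility in its own method - or better still, its own class:

using System;
using System.Collections.Generic;
using System.Linq;

class OrderPage
{
    // Before: one handler that loaded data, applied business rules and rendered
    // output all in one place. After: the handler only coordinates.
    public void PageLoad()
    {
        List<decimal> orders = LoadOpenOrders();           // data access only
        List<decimal> discounted = ApplyDiscounts(orders); // business rules only
        Render(discounted);                                // presentation only
    }

    private List<decimal> LoadOpenOrders()
    {
        return new List<decimal> { 120m, 80m, 300m };
    }

    private List<decimal> ApplyDiscounts(List<decimal> orders)
    {
        return orders.Select(total => total > 100m ? total * 0.9m : total).ToList();
    }

    private void Render(List<decimal> orders)
    {
        Console.WriteLine(string.Join(", ", orders.Select(o => o.ToString()).ToArray()));
    }
}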

Upbuild answered 2/1, 2009 at 13:14 Comment(7)
Agree, but not very controversial?Woolgrower
it's controversial because the ugly mess that most people call MVC is mostly a 'do everything'Amaranthaceous
Really? I actually thought that MVC was the opposite to that.Thompson
This answer seems to stir up a bit of controversy on its controversial-ness. ;PToothless
I Agree RE: MVC - really hard to limit method bloat on the controllersMonoacid
Re MVC: If method bloat is the issue then make more controllers, they shouldn't be bloated with methods it doesn't feel right if that happens, feels like the controllers try to do more than they should.Chelseychelsie
If you don't think this is controversial, you probably don't know how far you can go with this. :-)Bibliography
A
60

Objects Should Never Be In An Invalid State

Unfortunately, so many ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
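
To make it concrete, here's a minimal sketch (the Order class and its rules are invented for illustration) of a constructor that refuses to produce an invalid object:

using System;

public sealed class Order
{
    public int Id { get; private set; }
    public string CustomerName { get; private set; }

    public Order(int id, string customerName)
    {
        // The constructor is the only way to create an Order, and it enforces
        // the invariants, so every Order that exists is a valid one.
        if (id <= 0)
            throw new ArgumentOutOfRangeException("id", "Id must be positive.");
        if (string.IsNullOrEmpty(customerName))
            throw new ArgumentException("A customer name is required.", "customerName");

        Id = id;
        CustomerName = customerName;
    }
}

// Order ok = new Order(12345, "Doris"); // valid from the moment it exists
// Order bad = new Order(0, "");         // never constructed - throws instead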

Ahoufe answered 2/1, 2009 at 13:14 Comment(12)
TOTALLY agree! And I get very frustrated when I see concepts like this become so popular. +1Toulouselautrec
Invalid States lead to exceptions in my experience.Mindymine
@Cameron, are you saying that you should be able to initialize with a default constructor, then set each property, then setting checking for an invalid state in each setter and throwing an exception? If so, how can you possibly handle a situation where 2 properties need to be in synch to be valid?Toulouselautrec
That's why I hate ORM frameworks, despite the fact I need them all the time.Alva
I feel your pain Eduardo. I can't stand ORM frameworks, but sometimes they're the least-worst way to solve a particular problem. But yeah, I hate them too.Ahoufe
I dunno. If it was uncontroversial, then all of the major frameworks for Java (notably, Spring and Hibernate) wouldn't require me to break the rule in order to use their code.Ahoufe
@John: If two properties should be in sync, they are obviously related and should be edited together in a method: SetBothProperties( a, b )Unfamiliar
Sadly, serialization requires the existence of zero-arg constructors.Riancho
RAII - resource acquisition is initialization. FTWOphthalmologist
Sometimes it's sufficient to have protected zero arg constructors. That might help a little.Bibliography
This sort of reminds me of structs in Windows API programming. I could never figure out which fields I needed to set in order to have a valid instance of a struct like STARTUPINFO for example. Very frustrating.Windpipe
I had never heard anyone state this explicitly before. It is brilliantly simple--I like it.Maldonado
P
60

Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.) But, if you never see code checked in by your architect... be wary!

Poll answered 2/1, 2009 at 13:14 Comment(1)
Architects that do code are worse than those that don't. i.e. their productivity is negative.Schizophrenia
M
60

Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant that you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that in order to get any work done, if a file was locked devs would just change their local file to writable and merge (or overwrite) the source control version with theirs when they had the chance.

Mindymine answered 2/1, 2009 at 13:14 Comment(14)
I've always local-mirrored the code. Then I would do the merging with Windiff and an emacs-macro, then lock it only long enough to check in the changes. I hated it when people would lock a file, then go on vacation.Offside
I used to think that it was impossible to work in a team without file locks in your SCM. But after working with Subversion in four companies (and rolling it out myself in two of them), I find merging (auto when possible, manual when not) much better 99% of the time.Dulci
Not controversial. Nobody used SourceSafe by choice.Aquamarine
@MusiGenesis: Yes they do. They exist.Mindymine
My company is still using SourceSafe. The main reasons are a) General inertia and b) The devs are scared of the idea of working without exclusive locks.Suit
My personal feeling is that the ability to merge code files should be a skill all programmers need, like all programmers need to know how to compile their code. It's part of what we do as a byproduct of using source control.Mindymine
@MusiGenesis: I've headed a move away from SourceSafe in two different companies over the last 5 years, and in both cases the reason for using SourceSafe was ignorance of the alternatives.Beet
SourceSafe doesn't even work on anything based on IIS7. So soon enough it's going to be pretty much redundant.Pharisaic
Just to be pedantic...while exclusive locks were the default until recently, SourceSafe has actually supported edit-merge-commit mode since 1998.Whitted
@Ed - SourceSafe can work with IIS7 if you have WebDAV installed. The WebDAV plugin didn't ship with Vista but it's available as a free plugin, and also comes with Win2008. That said, I hope as much as anyone that it finally fizzles out. There are far better tools on the market (free & otherwise).Whitted
@Richard: Yes but nobody who uses Source Unsafe uses it in Merge mode because they're afraid to, etc.Mindymine
MKS baby! Finally just killing it off now.Apology
I would never want to put my precious source in something notorious for corrupting files. Had to use it once due to a lack of alternatives, got burnt.Daystar
@Aquamarine we do at my work place, but I don't particularly enjoy it. I'm much happier with SVN.Bekha
L
58

All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next iteration. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehensions and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder (see the sketch at the end of this answer).

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow an assignment to a string variable at all? Much better to use a builder class (i.e. a StringBuilder) all along.

I realize that most languages today just aren't built to accommodate my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they were changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
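
Here is the accumulator sketch promised above (a made-up C#/LINQ example; every variable in it is assigned exactly once):

using System;
using System.Linq;

class NoAccumulators
{
    static void Main()
    {
        int[] values = { 3, 1, 4, 1, 5, 9 };

        // The classic mutable accumulator would be:
        //   int sum = 0;
        //   foreach (int v in values) { sum += v; }
        // With a higher-order construct the accumulation is expressed directly,
        // and nothing needs to be reassigned.
        int sum = values.Sum();
        string squares = string.Join(", ", values.Select(v => (v * v).ToString()).ToArray());

        Console.WriteLine(sum);     // 23
        Console.WriteLine(squares); // 9, 1, 16, 1, 25, 81
    }
}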

Literatim answered 2/1, 2009 at 13:14 Comment(27)
Most functional languages are just like this; for example F# explicitly requires you to declare something as "mutable" if you want to be able to change it.Souter
Functional languages are just superior that way. Of the non-functional languages, Nemerle seems to be the only one offering this feature.Literatim
I like the bit in SICP where the authors dismiss 'looping constructs such as do, repeat, until, for, and while' as a language defect.Gwenni
Disagree but made me think. Interesting.Contraindicate
I personally like this. "Everything is immutable" makes multithreaded code a lot easier to write: locks are no longer needed since you never have to worry about another thread changing your object under your feet, so a whole class of errors related to race-conditions and deadlocking cease to exist.Horseshit
There's no such thing as a free lunch. Immutability, despite its many benefits, will have a cost. Generally I like the idea, in the same way I like the idea of functional programming. Can I get my head round that? No. Am I particularly thick? Maybe, but I don't think so.Alternately
@AnthonyWJones: what costs does immutable-by-default have?Horseshit
This makes me wonder what my code would be like and how I would need to change my understanding of programming paradigms. Could I deal with immutable variables? I can't begin to grasp the extent of the repercussions of doing this in C#, but I can't imagine anything good coming of it.Vesica
The thing I don't like about immutability is the amount of copying required.Yolondayon
I thought this was too much when I read it in Effective Java: Favor immutability. Then, when applied, it makes total sense. Apps are MUCH easier to create and maintain using immutability. The only extra thing needed is a macro template to "code" the copy methods just as TraumaPony pointed out.Homopolar
Language constructs can't take care of all accumulator cases. Sometimes what you are adding up isn't a simple list. It also could make hairy logic in some cases as you can't have a default value.Bradwell
@TraumaPony: The nice thing about immutability is that in (almost?) all cases copying can be replaced by simple aliasing. This does require some changes in data structures, though.Literatim
Another case that can't be immutable: Any sort of iterative calculation or calculation within a loop. More generally, the data you are working on. How well would Microsoft Immutable Word sell??Bradwell
@Princess: immutable-by-default has a comprehension cost. It's much more difficult to think about (not reason about, think about) immutable-by-default objects/variables/what-have-you.Westlund
I agree that variables should be readonly whenever possible. It lets the compiler optimize and it lets the developer know the value never changes after a certain point.Eggshaped
@Loren: about your “other case”: how is that different from a special accumulator? It is actually just that, and well covered by many frameworks, such as LINQ. Notice that any kind of user interaction rarely benefits from immutability so Immutable Word is probably not a good idea.Literatim
@Jeff: I think this is at least debatable. Programming in general has a comprehension cost, any style of programming does. But I doubt that immutable-by-default incurs any additional comprehension cost at all, especially since it's much closer to the mathematical use of variables in equations.Literatim
@Loren Pectel, I think that databases should be immutable too.Riancho
There's an obvious cost in complexifying and slowing down the code, to a huge degree. This idea must have been thought of by those who don't have to do too much math programming.Unnecessary
@Lance, The opposite is true. Immutability actually helps the compiler a great deal in producing more efficient code because it can apply many more automated optimizations. This style of coding works perfectly with “math programming” (I guess you mean arithmetically dense code).Literatim
I want an immutable apple. When I take a bite of the apple I get your apple with the bite taken out of it, and can give my apple to the next person who wants a whole apple. It's all so simple!Wage
@Greg, Things always change, we developers are the orchestrators and conductors of this change, because we change and shape the future with our ideas and our code. That's the reason we want immutability!Riancho
Yes, and we'll only access read-only databases, stored on read-only media. Maybe once our programs have no mutable state, and therefore accomplish nothing we can move on to truly pure functional programming where nothing happens and the compiler with the best optimization outputs nothing.Mouldy
Might be a little hard to animate anything if the variables describing the object to animate were immutable.Lafleur
@Kamil: no, not at all. In fact, Point objects in .NET are immutable, and animate just fine. You just need to create a new object for each animation position – which sounds inefficient but really isn’t necessarily.Literatim
Interestingly, in Java even loop variables can be final: for (final item : list) { ... } Took me a while to discover that.Bibliography
He's not saying that all variables should be final, he's saying all variables should be final by default. That's reasonable.Basset
M
58

Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. To write a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.

Mindymine answered 2/1, 2009 at 13:14 Comment(15)
Yes. And code can only be tested if it has room to fail. Simple structures without inconsistent states have nothing to unit test.Offside
Yeah, unit tests up front don't really make sense. If I wrote it down, I thought about the possibility. If I thought about the possibility, unless I'm a complete moron it'll at least work the first time around where the test would apply. Testing needs to catch what I DIDN'T think about!Pressure
Phoenix - you have a point about only catching what you didn't think about but I disagree with your overall point. The value of the tests is that they form a spec. Later, when I make a "small change" - the tests tell me I'm still Ok.Teeming
I worked at a company that wanted 95% test coverage, even for classes which contained nothing but fields to assign and no business logic whatsoever. The code produced at that company was horrible. My current company does not write any unit tests, relying instead on intense QA, and the code is top-notch.Horseshit
I write unit tests when I think I need them, but more importantly I write random test drivers, because my code might work fine in 100% of predictable cases. It's the unpredictable cases I'm worried about.Offside
In my current project, I've introduced up-front unit tests, and code quality has improved drastically. People had to be convinced at first, but soon noticed the positive effects themselves. So my experience says you're wrong. And PhoenixRedeemer, you ARE a complete moron... just like everyone else.Oops
@Brazzy: Why weren't your devs writing better code to start with? Notice my opinion says you don't "need" to write tests up front. I'm not saying you shouldn't, just that you should think about why you're writing that way.Mindymine
@brazzy: Hey, complete morons rule! :) I've seen code that is improved by unit tests, because it needed them. I've seen code that didn't need many unit tests, because it had few invalid states. My code tends to need randomly generated tests, due to the problem space.Offside
Unit tests are also about managing change. It's not the code that you are writing right now that needs the tests, but the code after the next iteration of change that will need it. How can you re-factor code if you have no way to prove that what it did before the change is still what it does after?Wage
@Greg: While it is true that you can't safely refactor if you can't prove you didn't break stuff, I do write tests designed to show changes after a refactor. My opinion of tests is mainly confined to their use up front. Tests are very useful when refactoring.Mindymine
Everyone writes the unit test that checks open() fails if the file doesn't exist. No one writes the unit test for what happens if the username is 100 characters on a tablet PC with a right-to-left language and a turkish keyboard.Declarative
I think this misses the point of test driven development, which hurts the argument. It isn't about testing edge cases, it is about driving design.Puppy
You don't need to catch every edge case. If you are testing the best case and a few common errors, when an edge case pops up you can write a test for it, fix it, AND ensure that you don't introduce new bugs. Apart from that, writing tests first forces you to think about what you are trying to achieve, and how. It helps you write small maintainable methods. I don't see how any programmer with a desire to write good software could be against this.Strangulation
Although I agree that "unit tests only catch the issues I've thought about", there are many times where I'm positive the code I just wrote satisfies a particular condition, yet the test reveals something I totally overlooked. Furthermore, the act of writing tests first forces you to think about all the edge cases in a manner that you might not have to as great a degree.Assemblyman
For me, an eye-opener about testing was this: you need to try out your code anyway - so why not do it in form of a test? Extensive testing is controversial, of course, but a little can get you a long way.Bibliography
T
52

Realizing sometimes good enough is good enough, is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not it's some crap that happens to work. But then again, when you are under a time crunch, 'some crap that happens to work', may be considered 'good enough'.

Toulouselautrec answered 2/1, 2009 at 13:14 Comment(0)
P
48

If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..

Powel answered 2/1, 2009 at 13:14 Comment(6)
Yes, apparently this is a very controversial viewPowel
BLASPHE---!! Um, I mean, yes, I quite concur.Roo
It does appear that writing a book on C# doesn't also mean you know everything about VB ;)Grosz
I think you might want to bring yourself up to date on the Jon Skeet facts. Remember: "Can Jon Skeet ask a question he cannot answer? Yes. And he can answer it too." He is omnipotent!Choiseul
At first I thought you said John Skeet isn't impotent.Junco
@Totophil: Interesting comment when you consider: Jon Skeet asked this question (and he posted an answer...)Washstand
A
46

"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man

Aracelyaraceous answered 2/1, 2009 at 13:14 Comment(8)
I think what you're trying to say is Swing sucks (as in JAVA UIs). Java back ends don't suck at all...unless that's the controversial bit ;)Stephen
You don't have to be a Java partisan to appreciate an application like JEdit. Java has some serious crushing deficiencies, but so does every other language. Those of Java are just easier to recognize.Unguiculate
I'm a C# fanboy, but I admire quite a few Java apps as being very well done.Crumpled
I think what you are trying to say is that the barrier for Java coding is so low that there are many sucky Java "programmers" out there writing complete crap.Penile
I agree that most Java desktop apps I've seen suck. But I wouldn't say the same of server apps.Fireside
You're going to blame a programming language for 'horrible user interfaces'? Surely that is a fault of the UI designer. And while I'm sure Java has its share of poorly coded software that runs slowly and consumes too much memory, it is not at all hard to write Java programs that run efficiently and use memory only as needed. Having worked on a Java based web crawler capable of crawling 100s of millions of URIs I can attest to this.Daynadays
Java as a programming language is lacking a lot of features that you really want to make your life simpler. Java as a development platform rocks, as it has a great set of libraries and a nice community.Beccafico
Java does suck. Get to know .NET, Visual Studio, then you will never again want to code for Java.Cadmarr
D
45

Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.
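
To illustrate that with a made-up example (the collection classes here are invented, not taken from the framework):

using System.Collections.Generic;

public class ItemCollection
{
    private readonly List<string> items = new List<string>();

    public virtual void Add(string item)
    {
        items.Add(item);
    }

    // AddRange happens to be implemented in terms of Add. That's an
    // implementation detail - but once the class can be derived from,
    // a derived class can come to depend on it.
    public virtual void AddRange(IEnumerable<string> newItems)
    {
        foreach (string item in newItems)
        {
            Add(item);
        }
    }

    public int Count
    {
        get { return items.Count; }
    }
}

// This override "works" only because AddRange currently routes through Add.
// If the base class is later changed to append directly, AddCalls silently
// becomes wrong - which is why the calling relationship has to be documented,
// and once documented it can't be changed.
public class CountingCollection : ItemCollection
{
    public int AddCalls { get; private set; }

    public override void Add(string item)
    {
        AddCalls++;
        base.Add(item);
    }
}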

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.

Derr answered 2/1, 2009 at 13:14 Comment(26)
I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction. I'd call that returning to their C++ roots.Dukie
C# isn't really rooted in C++ though - it's rooted in Java, pretty strongly. IMO, of course :)Derr
That's not controversial - that's common sense :)Tzar
I realize that the link between C# and Java is certainly stronger than C++, but if we were drawing an inheritance diagram they'd both claim C++ as parent (arguably grandparent for C++).Dukie
+1 from me. I very rarely have to remove a sealed modifier (and I make everything sealed by default, unless it is immediately clear that it cannot be sealed).Samalla
My understanding is that you are saying we should be extra careful when designing object hierarchies, but I don't understand how sealing classes by default would help to achieve this.Thompson
"I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction" how is that logical?? i miss the connection. making methods nonvirtual by default in c++ is a Good Thing (imho) +1Dalhousie
I think the counter-argument could be generalized to: a class that derives from a base class without overriding anything further up in the hierarchy can be done relatively safely, etc.Embezzle
i think this is an anti-pattern. Classes without inheritance are just modules. Please don't pretend to know what all future programmers will need to do with your code.Viand
Inheritance and immutability don't go well together. If I want to know for sure that an object is immutable, I must know that it is not derived from, since a derived type can break that contract.Cowpoke
Given your reasoning, it's difficult to disagree. However - if I wished to use your class for a purpose which you didn't intend, but through some clever overriding/application of your base methods/properties it will suit my purpose, isn't that my prerogative rather than yours?Vesica
+1 from me too. Its about avoiding implicit assumptions - which always come back to bite you. An explicit statement is always more accurate.Vallonia
@balabaster: If you do that and then I want to make a change, it's very likely to break your code. As a code supplier, I don't want to put customers in the position of having fragile code. (Not that I'm actually a code supplier etc. This is in theory.)Derr
I agree that inheritance should be guided, but sealing all your classes by default doesn't guide you it's a road block, removing inheritance entirelyEggshaped
Even so, I should understand the risks in deriving from a non-frozen class. Any changes you make in an unsealed class carry the same penalty, so all you're doing by making everything default-sealed is making it harder to use your code in my own way.Westlund
Agreed in principle, although I hated the sealed-by-default behaviour of methods when I was using early C# (at Microsoft, actually) because sometimes I would want to intercept calls to some library class's method, but couldn't just subclass it because they didn't make the methods virtual.Dorison
If an inheriting class changes behavior of the method it is wrong. Period. It does not fulfill the substitutability principle. There is no need to make a class sealed, just shoot the offender.Dalury
One problem with having everything sealed is that it kills proper unit testing. Because methods in the .NET framework are sealed, it's almost impossible to test classes that use .NET framework classes like DirectoryEntry (which uses external resources), without writing a wrapper firstImes
I agree, and I would expand the scope to say that all programming language constructs should default to the "safest" or "no additional work required" state (not the opposite). Also, there should always be an optional keyword for the default whenever there is a keyword to specify a non-default.Timeserver
You cannot mock sealed classes, except if they implement a certain interface which is used by all users of that class instead of the sealed class. (Bye Bye folks, I will descend into hell soon, as I dared to down vote Jon Skeet...)Downtime
I vastly prefer mocking of interfaces instead of classes anyway, so it's never been an issue for me.Derr
Why not get rid of defaults altogether and force the developer to make a decision on whether it's sealed or not? The same should go for public vs private.Carpetbagger
@Josh: Yes, that's definitely an interesting idea. There are some options where I don't want to have to be explicit - e.g. "nonvolatile" would be silly. How about "writable" as the opposite of "readonly" for static and instance variables though? Hmm...Derr
I've found this to be very controversial in the circles I frequent. While I favor interfaces & aggregation over inheritance, I've seen some very creative and powerful techniques employed that rely on the availability of inheritance in the framework level libraries being used. My observation is that many people are reluctant to change code that they didn't write to begin with (particularly in IT organizations where it sometimes is actively discouraged).Lonnylonslesaunier
Strongest argument I've seen for classes NOT to be sealed by default is that it would adversely impact the ecology of software libraries (commercial and internal). Too few people take the time to consider how their classes can be inherited - it's hard to get this right. Most will stick with the language default. Software changes relatively slowly (even when you have the code) and there will be a lag in getting inheritability changed. Finally, will people really spend more time designing for inheritance? Or just blindly add "overrideable" when the find a case where they decide they need it?Lonnylonslesaunier
@LBushkin: The fact that people don't take time to consider things properly (and that it's hard to get it right) is exactly why the default ought to be the safe option. Give people the shotgun unloaded, and make them load it themselves if they want to.Derr
C
43

Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.

Cosmic answered 2/1, 2009 at 13:14 Comment(1)
Yeah, that's why it's controversial :)Culbreth
L
42

A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.

Ladonna answered 2/1, 2009 at 13:14 Comment(5)
Real clever programmers are those that find the good answer while making it maintainable. Either that or those who hide their names from comments so users won't backfire asking for changes.Dalury
Real genius is seeing how really complex things can be solved in a really simple way. People who write needlessly complex code are just assholes who want to feel superior to the world around them.Premer
+1 Good programmers know their own limitations - if it's so clever you can only just understand it when you're writing it, well, it's probably wrong now, and you'll never understand it in 6 months time when it needs changing.Candide
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --unknownNeuritis
Robert, great quote: BTW it's from Brian Kernighan not "unknown"Candide
D
40

Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}
Domingodominguez answered 2/1, 2009 at 13:14 Comment(7)
I wouldn't apply this as a rule, but I definitely don't hesitate to take this route when it can reduce complexity and improve readability. +1 Why do you need peter so badly, though?Til
Not a fan of 'cavern code' are we? :) I have to agree however. I've actually worked on 'cavern code' that had more than an ENTIRE PAGE of just closing braces.... And that was on a 1920x1600 monitor (or whatever the exact res is).Gib
You should check out "Spartan programming" - this seems like a similar style.Selenodont
It is not indentation you are arguing against, it's deeply nested conditional and loop blocks. I fully concur in that regard. I've found that enforcing a code style with a maximum line length tends to discourage this behavior somewhat.Daynadays
Don't forget braces for "if"! use foreach! use (condition ? valueIfTrue : valueIfFalse) If you don't understand, search engine, learn!Oaxaca
I don't like the continue here.Bradwell
This is a dupe of the higher-ranked answer stackoverflow.com/questions/406760/…Assemblyman
A
40

If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53 year old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.

Alternately answered 2/1, 2009 at 13:14 Comment(9)
Excellent point. I re-learn this point the hard way every time I try to teach my parents (in their early 70s) how to use something on the computer or their cell phones.Aquamarine
I disagree. I don't think they are mutually exclusive. To take the opposite, people who have never used a computer before are the best interface designers.Liaison
I disagree, but only in the sense that most interface design decisions seem to be made by management.Alterant
I'd say they're definitely not mutually exclusive. I would more likely say that management should never decide where to put the button. I've had some of the most complicated interfaces ever created that way.Nabala
I wish I could upvote this twice. Yes, it's not universally true, but programmers tend to have the completely wrong mindset to design UI. We are too forgiving of interface flaws when it gives power and flexibility that end users don't need.Neuritis
That's one of my favorite books. Should be a must read - particularly for programmers who think they are web designers...Eurypterid
This is like saying "If you know anything about how a car works, you should not be allowed to design the interior." There is an entire discipline around UI design and if you are doing things just based on your mental model of some imaginary elderly user, then you are not doing it correctly. No one can account for everyone's mental model. Applying extensive research, best practices, statistical analysis, and user testing are the ways to get to your desired result. Programmers can learn this discipline too.Blackmore
@Ben: no you can't account for "everyone's" mental model, but it's a sure thing that the developer's mental model is entirely different from everyone else's. That's why an Interaction design professional will invent a person that best represents the typical user. If a system has users of very different persona (e.g., in addition to Doris we may invent Jeff the IT admin guy) then good interaction design will use Jeff as the target audience for the tasks he is likely to engage in.Alternately
Interaction Design by users is what gave MySpace its reputation for vomit-inducing pages.Goodrum
C
38

I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in Python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.

Corona answered 2/1, 2009 at 13:14 Comment(9)
A well configured editor can help you here. Most editors can show invisibles and vim for one can highlight those invisible mistakes in red to make them really obvious.Lilithe
I think that the bad idea becomes more obvious when you think about the ridiculous limitation of lambda in Python.Bozo
The number of times I've had a python script fail because I put a blank line in my code in a for loop, and the blank line didn't have enough spaces... Makes me want to not space my code with blank lines.Mindymine
I don't agree with you, but +1 because it is controversialWarehouse
It was also true of the original Unix make command. Actions had to be one tab space in; if you used spaces instead, an action looked like a syntax error. Ugh!Wilkie
History repeats itself. We didn't learn from Fortran output formatting or from make files so why be surprised that someone thought it was a good idea for python? It won't be the last time.Goodrum
@mcrute: if you have to build a special-purpose tool just to work with the language, that sounds like a problem to me.Joanejoanie
"About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students." So how is this a problem?Mitchellmitchem
@Paul Nathan: if you have to build a special-purpose tool to write well-indented code with a braces language, that sounds like a problem to me.Devastate
S
38

Before January 1st 1970, true and false were the other way around...

Shavian answered 2/1, 2009 at 13:14 Comment(3)
Oh man, this is the funniest thing I've seen on SO in a long time.Aquamarine
I understand how *nix systems record time, and how true and false are represented. But, could someone explain this joke to me, I don't get it? Thanks.Geosphere
it's like particles and anti-particles: for an arbitrary system (like a computer) it doesn't actually matter what label you ascribe to each value, the two things are defined by each other. Kaons spoil the metaphor a bit, but it's just a joke so you'll have to learn to let it go.Shavian
H
37

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate a failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#); instead, it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
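
To make the idea concrete, here is a rough sketch of what a hand-rolled option type might look like in C# (purely illustrative - the names and API are my own invention, not F#'s):

using System;

// Hypothetical Option<T>: "might be absent" becomes part of the signature,
// so callers are forced to handle the empty case instead of discovering
// a null at runtime.
public abstract class Option<T>
{
    public static Option<T> Some(T value) { return new SomeValue(value); }
    public static Option<T> None() { return new NoValue(); }

    // The only way to get at the value is to say what happens in both cases.
    public abstract TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone);

    private sealed class SomeValue : Option<T>
    {
        private readonly T value;
        public SomeValue(T value) { this.value = value; }
        public override TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone)
        {
            return ifSome(value);
        }
    }

    private sealed class NoValue : Option<T>
    {
        public override TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone)
        {
            return ifNone();
        }
    }
}

A method declared as, say, Option<Customer> FindCustomer(string id) now advertises at every call site that it can come back empty, while one declared as Customer FindCustomer(string id) promises that it can't - which is exactly the information nulls hide.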

Horseshit answered 2/1, 2009 at 13:14 Comment(12)
An interesting link to confirm your point of view: sadekdrobi.com/2008/12/22/…Pistoia
Nemanja: Fascinating find, too bad I can't upvote comments :)Horseshit
I would rather have "non-nullable reference types" (with compiler checking) than completely remove null.Derr
I have to agree with Jon; "null" is frequently a valid state and indicates something completely different from zero or empty. Eliminating it would be a mistake IMO; but for those cases where it's not appropriate, a non-nullable object type would be nice.Roo
Correction: a non-nullable reference.Roo
I disagree, but then I use Objective-C where nil is quite a handy concept.Floatplane
This is like prohibiting zero to prevent divide-by-zero errors. Nulls happen in real-world situations and forbidding them would force everyone to hand roll their own ad hoc implementations.Hwang
I really like Scala's approach to this: there is no null, and if you want the same effect you have to wrap it in an Option[T] object (either Some[T] or None) which forces you to notice and check it. No more accidental nulls.Skyward
I don't necessarily agree that they should be removed, but I do think the Null Object Pattern should be preferred over checking for null every four lines in your code.Coptic
Princess, if you like Nemanja's link you can edit your answer and include itCandide
Agree with Jon. It should be possible to have the language enforce that a given variable can never be assigned null.Eward
The problem is your strongly typed language, not null. In a language where null is a valid value and calling any method on null returns null is great.Unleavened
K
37

You don't have to program everything

I'm getting tired of how everything, but then everything, needs to be stuffed into a program, as if that were always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.

Katey answered 2/1, 2009 at 13:14 Comment(6)
+1 sorta. I use my tablet when I can like pen and paper because sometimes it's just easier to write than use a piece of software.Triviality
Do you mean "everything" and not "anything"?Leonardo
Well, he said "you don't have to program" and I completely agree - nobody has forced me to program, I just happen to like it. Sorry, but no controversy here.Constipate
No, no, I have to program lots of things.Mouldy
You don't have to program everythingIlse
+1 for low-tech. Sometimes an Excel spreadsheet will do the trick just fine instead of coding an expensive CRUD.Allethrin
S
35

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.

Snell answered 2/1, 2009 at 13:14 Comment(0)
B
35

Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
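
To be concrete, the sort of thing I mean only takes a handful of lines (a C# sketch, not my actual message pump):

using System;

public sealed class MessagePump
{
    // Exactly one instance, created once by the runtime; the private
    // constructor means a new developer can't accidentally "new one up".
    private static readonly MessagePump instance = new MessagePump();

    public static MessagePump Instance { get { return instance; } }

    private MessagePump() { }

    public void Post(string message)
    {
        // Stand-in for real dispatch logic.
        Console.WriteLine("dispatching: " + message);
    }
}

Anything in the system can call MessagePump.Instance.Post("work ready") without the pump being threaded through every constructor, and there is no second pump for messages to silently disappear into.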

Bunni answered 2/1, 2009 at 13:14 Comment(7)
+1 because I disagree so strongly. Singletons (the design pattern) make testing such a nightmare they should never be used. Note that singletons (an object only instantiated once) are fine, but they should be passed in through dependency injection.Basset
A logger is certainly not a perfect candidate for a singleton. You may want to have two loggers. I've been in that exact situation before. It may be a good candidate for being global, but certainly not for being forced into "one instance only". Very few things require that constraint.Ludovico
The way I figure it, I've used some singletons in one project, and I might well do so again before I retire. Not the most widely useable patterns, but valuable for some things.Rosebay
I really recommend reading misko.hevery.com/2008/08/25/root-cause-of-singletons to you.Horseflesh
I would like to add that in C++, the singleton pattern is extremely important due to the static initialization fiasco.Advertise
Logging is the only common use of the singleton pattern, all others uses are mostly bad.Olivo
I have never found a case of a singleton that could not be substituted with a static, besides in languages that do not have a proper static initialization time, bringing on the static fiasco.Elspet
D
34

A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation to say "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.

Dunkin answered 2/1, 2009 at 13:14 Comment(3)
I don't agree that a picture is not worth a thousand words. I do agree with the sentiment in the answer. Perhaps it would be better to ask "Would you use 1000 words when only a few (or even one) would do?". Using an image instead of well-chosen text may effectively be just that.Alternately
Some words are worth thousands of pictures. (What about sounds, music, odours, etc?)Oaxaca
Yes but a 32,000 bit bitmap IS one thousand words. At least until you move to a 64-bit CPU.Goodrum
E
33

Don't write code, remove code!

As a smart teacher once told me: "Don't write code. Writing code is bad, removing code is good. And if you have to write code - write small code..."

Erme answered 2/1, 2009 at 13:14 Comment(0)
D
33

There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.

Demicanton answered 2/1, 2009 at 13:14 Comment(1)
Those who can't do, teach. By that logic, the people who can't program are the ones teaching us how to program. I've experienced it myself where the professors I've had have admitted to being unable to do the problems and exercises they assign. Protip: Take the classes with the teachers contracted by the university, not tenure (or tenure-pathed) professors.Bekha
C
33

You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.

Castano answered 2/1, 2009 at 13:14 Comment(4)
If your language claims to be OO but has built-in types that are syntactically and semantically different from objects, and you think this is just fine, you may be a Java or C++ programmer.Postfix
@Barry! What about us Objective-C programmers! That might be us too!Vaasta
C++ is multiparadigm, and as such it can decide to use whatever types it wants :PDalury
Object orientation is a means to a goal and not a goal in and of itself.Premer
C
32

It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying you necessarily ever have to optimise, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.

Hugo

Coach answered 2/1, 2009 at 13:14 Comment(1)
That sounds very much like my way of thinking: optimise the architecture/design, not the implementation.Derr
M
31

Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things into a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?

Marks answered 2/1, 2009 at 13:14 Comment(8)
Stored proc code like T-SQL or PL/SQL is not stored in files.Riancho
The main problem is that a picture is worth 1000 vague words while you can be very specific in text. But I agree that we really need a "birds eye development" mode where you can hack together a rough outline and let the IDE fill 99% of the gaps with defaults.Calctufa
I believe smalltalk does this. Yet strangely it's still not a widely used language.Planetesimal
+1 for really cool idea. -1 not terribly controversial. I like the idea of perhaps seeing method declarations in a 3d space with calls to other methods shown using lines / color / something. Perhaps it would be a mess, perhaps it would make an overall code overview easier to grasp? Dunno how much of this smalltalk does as suggested above.Larios
YES! I really wish programmers get over the cult of linear plaintext soon. Modern IDEs do take steps in the right direction, but it's not enough - they are still just annotating and working on the plaintext, bending it already almost to the breaking point. Instead of hackarounds, we should be shifting the paradigm into expressing application design in forms that are much more suitable for it!Phenanthrene
YES! I miss Visual Age for Java every day I get into Eclipse. VAJ had no source files, just some kind of binary repository :SDalmatia
Boo! I currently work on a project that is doing all of its development in one of your magic IDEs. The problem with abstracting details is that they have to be defined somewhere. The more obscure (according to your IDE designer), the harder the detail is to find. And if the detail happens to be causing a bug or compiler error, you get to hunt through the IDE for where that detail is. It may be possible to have one of those magic IDEs some day, but not without the Mozart of the HCI field and not without enormous vendor lock-in.Bruise
The stuff you're talking about is called Model Driven Development. I don't know what the right alternative is, but I suspect it is a blending of scripts, OO, and low-level code in one program, using the right language for each job.Bruise
L
31

C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.

Ludovico answered 2/1, 2009 at 13:14 Comment(3)
true dat. My main beef is that every 3rd party library has its own string class. I waste too much time converting from CString to std::string to WxString to char*. Can't everyone just use std::string or const char*.Idioplasm
Not true "C++ has plenty of strengths that no other language can match. It's a good language." EVERY language has strengths that no longer language can match (even LOLCODE, hey it's a lot of fun).Coquillage
Perhaps. But C++'s strengths are a bit more commonly useful. Let me know when your language of choice supports compile-time metaprogramming or RAII.Ludovico
M
31

If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.

Mackinnon answered 2/1, 2009 at 13:14 Comment(6)
I agree clear communication is important. But grammar is secondary. Some people have poor grammar but can communicate clearly (I'm thinking of some non-native English speakers) and some people have perfect grammar but can hardly communicate at all.Junco
Ironically, there are many developers that think this is beneath them. Comments and documentation that looks like it's written by a retard should somehow convey that they are truly great hackers.Premer
This isn't just about grammar and spelling either. It is possible to write something that has correct grammar and spelling yet is nearly impossible for others to understand (just as you can write a program that compiles and runs yet is impossible to understand the code). Being able to express yourself clearly in writing is very important. Having taught a comp-sci course that involves writing design documents for the last six years I've found it distressing how few of my students seem to possess this ability. And it seems to be getting worse each year.Daynadays
@John D Cook Poor grammar is most often detrimental for communication. These rules weren't invented for no reason (goes to check if there are no grammar mistaeks in those comment).Cromlech
"If a developer cannot write a clear, concise and grammatical comment s..." Deliberate irony?Wolver
Never ceases to amaze me how often I get to paste this: english.stackexchange.comAstronautics
H
30

Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.
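
As a small illustration of the general point (my own C# example, nothing to do with Scheme macros specifically): once a language has first-class functions, the classic Strategy pattern stops being a "pattern" you have to codify and becomes an ordinary parameter:

using System;
using System.Collections.Generic;
using System.Linq;

static class Pricing
{
    // The "strategy" is just a function argument: no IDiscountStrategy
    // interface, no one-class-per-policy, no factory to wire them together.
    public static IEnumerable<decimal> Apply(IEnumerable<decimal> prices,
                                             Func<decimal, decimal> discount)
    {
        return prices.Select(discount);
    }

    static void Main()
    {
        foreach (var p in Apply(new[] { 10m, 20m, 30m }, price => price * 0.9m))
            Console.WriteLine(p);   // each price with 10% off
    }
}

The more expressive the language, the more of those recipes dissolve into one-liners like this.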

Hally answered 2/1, 2009 at 13:14 Comment(1)
I agree with your general feeling. Try and see them as temporal observations. See blog.plover.com/prog/design-patterns.html for example.Henriettehenriha
O
30

One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not fanatical mindfulness of "where" the data is at in any given moment.

Outspoken answered 2/1, 2009 at 13:14 Comment(4)
It's an interesting idea but I think it depends what kind of program you're writing. Five worlds man. joelonsoftware.com/articles/FiveWorlds.htmlCandide
I couldn't agree more (granted I'm a DBA so all we deal with is data).Helminthic
The system also seems to lose its way if the data is out.Outspoken
I'd take the relations of the data into account, too, so "The model is the system". I mean the second letter of a name is relatively useless without the rest and the first name needs the family name and the employee the department, etc.Calctufa
V
28

Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.

Vogeley answered 2/1, 2009 at 13:14 Comment(4)
There are two types of optimisation, by architecture and by code. Architecural optimisation is clearly needed before you write code. However the term 'premature optimization' really applies to efforts to write code optimally instead of simply. This is evil.Alternately
I am often called in to straighten out big messes that were architected ostensibly with the objective of "performance".Offside
@Mike: There has to be some understanding of volumes and response requirements before the app is developed. Such things have to be considered in the initial architecture. Of course specific performance choices need to be justified.Alternately
@Mike, as I mentioned, it's all to do with context. I work in the geospatial domain, where the default complexity of many problems is O(n^3). In this arena, optimization is a must, and it has to happen at design time. Analysing underperforming code with a profiler is rarely helpful.Hyphenated
D
26

I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. Eg, the importance placed on code reviews when you haven't even reviewed your design! It's madness.

Diamagnetism answered 2/1, 2009 at 13:14 Comment(10)
I don't know, I've seen an upfront design be a very good guide to development. I've never seen it work out such that the upfront design is followed exactly. It seems in my experience that when the rubber hits the road, designs have to be reworked.Idioplasm
fine with that, so you iterate... amend the design now that you have discovered something new and get on with the job. Your code is, once again, an expression of your design. It's developers that think that a design follows code that irk me.Diamagnetism
I wish I was allowed to design before I code. In this job it's "I have an idea" from somoene followed by a directive to get something in a demo ASAP.Gorton
Much of my design is noted in header files and/or a few diagrams on a white board. I'm not saying anything about the how formal your design should be, or how to do it, but for the love of God, get your thoughts sorted before coding!Diamagnetism
I've been irritated by the opposite, too much value placed in the design. The mantra "reuse the design not the code" forgets the time spent on implementing, testing, debugging and generally hardening the codebase. You cannot just throw that amount of work out.Balloon
@Daniel: I think I agree with you. At the same time, it's important to be ready and able to revise the design and the code late and often. That takes skill that, I'm afraid, is not taught.Offside
@Mike - I'm not saying that we all return to a waterfall model. Quite the opposite - as a developer you should expect things to change, so design your system to cater for change (eg, minimise coupling) and expect unexpected iterations that affect your design. You are right - this is not taught.Diamagnetism
So if you have to iterate anyway, the choice to design first or code first is essentially the same thing.Vaasta
@Kendall: you are kidding, right? Perhaps you are thinking of a proof by induction for your statement, but I'd hope that the number of iterations to write a bit of code that is closed against change is small. In that case, I believe that design first is far more efficient.Diamagnetism
I believe in iterative design. If you invest too much time upfront in design, you won't have the time to do the necessary rewrite (which always happens).Cromlech
W
25

1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one-hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer scientists - they do not reflect reality and computer programs should reflect reality as much as possible.
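
To show where the plus-one-minus-one noise comes from, here's a small C# illustration (the CLR really does support arrays with a lower bound of 1 via Array.CreateInstance, even though the language syntax doesn't encourage it):

using System;

class Counting
{
    static void Main()
    {
        // 0-based: the 3rd widget lives at index 2, the last one at Length - 1.
        string[] widgets = { "red", "green", "blue", "yellow" };
        Console.WriteLine(widgets[3 - 1]);                       // blue (the 3rd)
        Console.WriteLine(widgets[widgets.Length - 1]);          // yellow (the last)

        // 1-based: the 3rd widget lives at index 3, the last one at Length.
        Array oneBased = Array.CreateInstance(typeof(string), new[] { 4 }, new[] { 1 });
        for (int i = 1; i <= oneBased.Length; i++)
            oneBased.SetValue(widgets[i - 1], i);                // the "- 1" here is exactly the noise in question
        Console.WriteLine(oneBased.GetValue(3));                 // blue (the 3rd)
        Console.WriteLine(oneBased.GetValue(oneBased.Length));   // yellow (the last)
    }
}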

Woodhead answered 2/1, 2009 at 13:14 Comment(12)
Actually, 0-based arrays are based in the reality of pointer addressing, which stems from how memory is laid out.Joanejoanie
Can you tell me which is the first minute of the hour, please? I always forget...Derr
@Paul: Agreed! And it's completely abstract and has nothing to do with counting. @Jon: The first minute is one, when we get to one we have counted off the first minute. Just like your first birthday celebrates your first year of life. There is no 0th anything.Woodhead
+1 @Jack, this is the perfect sort of controversial programming opinion. As much as my inner programmer hates to admit it, you've actually got a point. It even enticed Jon Skeet to enter the controversy.Mackler
I completely disagree with this opinion, so I'm upvoting it.Sergeant
It's offset vs. index, fencepost vs. fence-segment. Posts work well for open-end ranges and segments work well for closed-end ranges.Halter
Jon skeet sleeps with a pillow under his gunToh
0-based arrays are (at least for me) very natural, and indeed, natural numbers begin with 0. +1 to this, is veeeeery controversial.Kolinsky
Who says you have to use element 0 if it's not appropriate for the domain? Are you that hard up for memory that you can't waste one element?Hillie
@Jeanne: If you're not using the 0th element, effectively that's one-based :).Woodhead
I interpreted your post as saying compilers should default to using one-based arrays.Hillie
+1 I often have trouble in real life situations because I'm so used to start counting at 0 o.o.Pen
L
25

Emacs is better

Leopardi answered 2/1, 2009 at 13:14 Comment(4)
Actually, either vi or vim is better.Rosebay
Only for those who have stuff in their .emacs file (which they understand).Eward
as a vim user, I have to +1 this one.Paction
What's funny is in the past year and a half, I've come to love vi more.Leopardi
S
25

The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."

Surcingle answered 2/1, 2009 at 13:14 Comment(2)
I say: If someone does something stupid, I'm missing an important fact.Calctufa
Even though i'm guilty of this sometimes, I have to agree.Dielectric
E
25

Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.

Edmundedmunda answered 2/1, 2009 at 13:14 Comment(8)
If they're refactoring where appropriate, I probably wouldn't call them cowboys...Derr
To me a cowboy is someone who just jumps into a problem and recklessly writes code, rather than thinking about, estimating, and designing something first. They do it without any regard to a process or accountability other than "it better get done, as fast as possible".Edmundedmunda
You! You're the idiot who came up with the legacy system that 5 years later I'm hired to deal with. I've spent most of my life working on 5+ year old code that because cowboys worked on it has ossified into an inflexible mess that is too brittle to be modified or added to.Mindymine
Cameron: I think you need a new profession. Sounds like your job sucks. :)Edmundedmunda
No my current job doesn't suck, but that's because I'm not working on a creaking legacy system. I suppose it's unfair to only blame the cowboys for those systems, as they started ok, and then 5+ years of patches got applied. Now I ask how old the code is in interviews.Mindymine
I'd like the cowboys to think a little, but not so much they need to write a supporting design document first or anything like that. I agree that often designers get stuck in the "what about this scenario" syndrome.Mindymine
@Cameron: Yes, it's unfair to blame only the cowboys. Blame their managers.Terricolous
We call them "Ninja programmers" because there's nothing they can't do. (Just like ninjas)Millennial
C
24

The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.

Cnidoblast answered 2/1, 2009 at 13:14 Comment(8)
It sounds like you're seeing process being used to compensate for poor programmers, not to enhance great developers. This is why the Agile Manifesto says "Individuals and interactions over processes and tools". Instead of adding process for poor programmers, add it when # of programmers grows.Cowpoke
@jay not quite. I think that process even put around the best developers causes a decrease in code quality. I would liken it to meeting a famous painter, and then telling him the rules he needs to abide by to make a good painting. It might make sense to you, but it's ridiculous.Stephen
I suspect great painters have their own processes.Tocci
Process takes away energy that makes code better - that applies to coders good and bad. Some process is useful but process breeds process and you always end up with too much.Vaasta
I couldn't agree with you more! The arguments I've gotten into with other programmers over their strict adherence to processes could fill a book the size of War And Peace. That includes both "good" and bad processes, though.Numbersnumbfish
I've seen the opposite effect. I worked a company which used an Agile methodology, and the code quality was nightmarishly bad, beyond awful. I now work at a company with a very rigid process, lots of red tape around undocumented changes, and the resulting code is top notch.Horseshit
One size does not fit all. Small project, small team in one location, experienced developers, domain expert on site, software not absolutely critical? (some software, if you have a bug someone might die.) Then yes, just run wild. If not, you need more process.Candide
If your processes make things harder, you're doing it wrong. It should be like an aircraft takeoff checklist; it helps you remember to do stuff in the right order. Automate things: you're a software developer dammit. Make the easy thing the right thing.Formate
T
23

To produce great software, you need domain specialists as much as good developers.

Terricolous answered 2/1, 2009 at 13:14 Comment(3)
This is as controversial as a cup of coffee.Leggat
@Andrew from NZSG Like many of the sentences posed here, it has been "controversial" during my past work experience, because more often than not software projects have been developed without keeping that in mind. If something happens most of the time and I disagree with it, I qualify my own opinion as somewhat "controversial", even though it is obvious that I am right.Terricolous
@Andrew: I once phoned a company about a Java developer job ad, long time ago. They asked me, "Do you know Java?" Yes. "Could you write a book-keeping application?" Err, by myself? No. With a financial advisor next to me, yes. "I see. Thank you for your interest." WTF?Valuator
D
23

Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.
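
For instance, the sort of comment that typically gets written for the extraction tool ends up looking like this (a made-up C# / XML-doc example):

public interface IOrderService
{
    /// <summary>Gets the order status.</summary>
    /// <param name="orderId">The order id.</param>
    /// <returns>The order status.</returns>
    string GetOrderStatus(int orderId);

    // Neither audience is served: the maintainer gets markup noise instead of
    // the "why", and the user learns nothing the signature didn't already say -
    // no valid range for orderId, no word on what happens when the order
    // doesn't exist, no example of how to call it.
}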

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or what files they're in, for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.

Discoloration answered 2/1, 2009 at 13:14 Comment(2)
When I use Doxygen, I use \internal tags very often. This makes it easy to generate two sets of documentation exactly as you describe. (Of course, I also continue to use regular comments throughout code where required.)Hexylresorcinol
I don't just like JavaDoc. I love it.Premer
F
23

Opinion: That frameworks and third-party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.

Fortson answered 2/1, 2009 at 13:14 Comment(4)
I disagree; how many StringUtils classes do you have in your project? I once found a project that had 5 of them. Most of that stuff could be replaced by a third-party lib.Miscegenation
Frameworks, yes. Useless overhead, many times. Third party components, no! Portions of the task already completed, tested and debugged by thousands of other people!Dunkin
@Dunkin -- I can't help it. I really am a roll-your-own type of guy at heart. I will fully admit that I might be jaded as places I've worked at in the past tried to buy the absolute cheapest things possible. It bit us in the end.Woolgathering
Joel in defence of not-invented here syndrome: joelonsoftware.com/articles/fog0000000007.htmlCandide
B
22

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It's not making you stupider if you're learning from it.

Biltong answered 2/1, 2009 at 13:14 Comment(1)
This should be a rule in any position that has anything to do with using a computer. Not just restricted to programmers.Brann
O
22

My most controversial programming opinion is that finding performance problems is not about measuring, it is about capturing.

If you're hunting for elephants in a room (as opposed to mice) do you need to know how big they are? NO! All you have to do is look. Their very bigness is what makes them easy to find! It isn't necessary to measure them first.

The idea of measurement has been common wisdom at least since the paper on gprof (Susan L. Graham, et al 1982)*, when all along, right under our noses, has been a very simple and direct way to find code worth optimizing.

As a small example, here's how it works. Suppose you take 5 random-time samples of the call stack, and you happen to see a particular instruction on 3 out of 5 samples. What does that tell you?

.............   .............   .............   .............   .............
.............   .............   .............   .............   .............
Foo: call Bar   .............   .............   Foo: call Bar   .............
.............   Foo: call Bar   .............   .............   .............
.............   .............   .............   Foo: call Bar   .............
.............   .............   .............   .............   .............
                .............                                   .............

It tells you the program is spending 60% of its time doing work requested by that instruction. Removing it removes that 60%:

...\...../...   ...\...../...   .............   ...\...../...   .............
....\.../....   ....\.../....   .............   ....\.../....   .............
Foo: \a/l Bar   .....\./.....   .............   Foo: \a/l Bar   .............
......X......   Foo: cXll Bar   .............   ......X......   .............
...../.\.....   ...../.\.....   .............   Foo: /a\l Bar   .............
..../...\....   ..../...\....   .............   ..../...\....   .............
   /     \      .../.....\...                      /     \      .............

Roughly.

If you can remove the instruction (or invoke it a lot less), only the other 40% of the time remains, so that's a 1/0.4 = 2.5x speedup, approximately. (Notice - recursion is irrelevant - if the elephant's pregnant, it's not any smaller.) Then you can repeat the process, until you truly approach an optimum.

  • This did not require accuracy of measurement, function timing, call counting, graphs, hundreds of samples, any of that typical profiling stuff.

Some people use this whenever they have a performance problem, and don't understand what's the big deal.

Most people have never heard of it, and when they do hear of it, think it is just an inferior mode of sampling. But it is very different, because it pinpoints problems by giving cost of call sites (as well as terminal instructions), as a percent of wall-clock time. Most profilers (not all), whether they use sampling or instrumentation, do not do that. Instead they give a variety of summary measurements that are, at best, clues to the possible location of problems. Here is a more extensive summary of the differences.

*In fact that paper claimed that the purpose of gprof was to "help the user evaluate alternative implementations of abstractions". It did not claim to help the user locate the code needing an alternative implementation, at a finer level than functions.


My second most controversial opinion is this, or it might be if it weren't so hard to understand.

Offside answered 2/1, 2009 at 13:14 Comment(13)
I can add one more type of reaction: "This is a great technique, but why not use one of the tools that automates it?"Manifold
@Crash: Happy Halloween! You're right, that is another reaction I get, and of course the answer is: "You could if they exist". I don't want much: 1) take and retain stackshots, 2) rank statements (not functions) by inclusive time (i.e. % of samples containing them), 3) let you pick representative stackshots and study them.Offside
... I built one ages ago, to run under DOS. It didn't do (3) but it had a "butterfly view" between statements (not functions). The real value was that it would focus my attention on costly call sites, and then I would take manual samples until one of those showed up under the debugger, and then I could really look to see what was going on, because just knowing the location was not enough.Offside
... as a recent example, this C# app takes its time starting up. Half a dozen stackshots show about half the time is spent looking up strings in a resource and converting them to string objects, so they can be internationalized. What the stack sample by itself doesn't show is how often the string is something you would never want to internationalize, which in this case is most of the time. Just finding a slow function, or looking at numbers after a run, would never reveal that.Offside
@Crash: Actually there's a tool called RotateRight/Zoom that is close to doing it how I think is right. It takes and retains stackshots. You can manually control when it samples. It has a butterfly view that can work at the statement level. It gives you total time as a percent, which is the fraction of samples containing the line.Offside
People with a low boredom threshold might press Ctrl+C after one second, which may not be a representative sample of the program as a whole.Opuscule
@Andrew-Grimm: The problem, when removed, will save some %. Pick a %. 20%, 50%, 90%, 10%? Whatever it is, that is (at least) the probability that each ^C will see it. One way is take 20 samples - 20 * x%/100 will show it. Another way is, just take samples until something appears more than once. It's a big one, guaranteed.Offside
... one sample is not enough unless you know there is a big (high percentage) problem. In the limit, if you know there is an infinite loop, it only takes one sample to see it. In general, you don't know, so take multiple samples.Offside
If all you're interested in is "is there enough space in this room" then you definitely need to know how big the elephants are. Measuring and capturing go well together - you don't need to commit yourself to only using one technique.Derr
@Jon: That's just a metaphor I'm using to try to get the idea across that if something's taking too long, stackshots can find the problem with precision of location, but not necessarily precision of time measurement. I've seen one profiler that does this (Zoom), but I haven't seen them all. Mainly I'm zealot-ing for an orthogonal way of thinking about performance tuning - to expect big speedup factors, which are typically mid-stack lines of code doing stuff you didn't realize.Offside
@Jon ... and there's a central phenomenon that I never hear discussed on SO (magnification), and it's the route to big speedups. If there's a series of problems accounting for 50%, 25%, 12.5%, 6.25% of time, each time you fix the biggest one, the rest get twice as big (thus easier to find). If any one of these along the way is not something your profiler can pinpoint, you're stuck, not getting the full speedup.Offside
@Mike: Absolutely. Most profilers I've used have shown figures as "percentage of time spent in method" mind you - with raw figures as well, but they tend not to be as useful. But yes, it's certainly possible to find big speed-ups. I recently found some in Noda Time :)Derr
@Jon: Right. What I like about Zoom is it gives you % time (wall-clock) at the line-of-code level, it ignores recursion (yay!), and it has a butterfly view, although it's a function-level not line-level butterfly. But still, those things are cute & helpful, but when I've got serious tuning to do, the fact that you can see all the variables, and you can read the why off of individual stack samples, is what, for me, makes all the difference for the manual method. Cheers.Offside
R
22

The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. And that route implies that there will be times when the product owner asks for something that he/she feels is the next revolution in computing, leaving someone to either agree with that, or explain the more likely reason why no one does something a certain way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since the communication has improved due to disagreements on both sides of the table, it is more likely that we come together earlier in the process and come to a mutually beneficial solution to the product definition.

Redfield answered 2/1, 2009 at 13:14 Comment(2)
this is one of those answers I would like to give +100 toFlabbergast
sigh.. so true! can I buy you a beer?Ectoblast
C
22

If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through source code or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?

Corduroys answered 2/1, 2009 at 13:14 Comment(6)
IMHO, if you need code completion so badly, it's a code smell, or even a design smell: it indicates that the design has grown too complicated, too interdependent, too tightly coupled to other modules' responsibilities. It's a bit controversial too: refactor it until it fits into your brain!Hervey
Code completion slows typing. Even set to zero delay, there's the tiniest pause while you wait for code completion. I agree that if you need code completion on your own code, that may well be a sign something needs simplification. But libraries are so large now, I think it helps more than hurts.Vaasta
@vincent: Do you never use massive libraries (.NET Framework / Windows API etc)?Colorimeter
I'm using Django, and RoR before. Both encourage cohesion and small files. At the same time I'm helping out a beginner with VB.net, and I have to say VS is impressive, and it certainly influences the code style itself ; but code completion has to be a double-edged sword. ( BTW, I HATE eclipse )Hervey
VS has really fast completion @Kendall: it doesn't impede my typing. Half the time I write Con.Wr[Down]( for Console.WriteLine(. That's 10 keystrokes less. @vincent, I agree, Eclipse needs to improve their code completion.Coquillage
I work with only one other developer on a project with 240k lines of code and almost a thousand files. We couldn't live without code completion.Kathernkatheryn
R
21

A Good Programmer Hates Coding

Similar to "A Good Programmer is a Lazy Programmer" and "Less Code is Better." But by following this philosophy, I have managed to write applications which might otherwise use several times as much code (and take several times as much development time). In short: think before you code. Most of the parts of my own programs which end up causing problems later were parts that I actually enjoyed coding, and thus had too much code, and thus were poorly written. Just like this paragraph.

A Good Programmer is a Designer

I've found that programming uses the same concepts as design (as in, the same design concepts used in art). I'm not sure most other programmers find the same thing to be true; maybe it is a right brain/left brain thing. Too many programs out there are ugly, from their code to their command line user interface to their graphical user interface, and it is clear that the designers of these programs were not, in fact, designers.

Although correlation may not, in this case, imply causation, I've noticed that as I've become better at design, I've become better at coding. The same process of making things fit and feel right can and should be used in both places. If code doesn't feel right, it will cause problems because either a) it is not right, or b) you'll assume it works in a way that "feels right" later, and it will then again be not right.

Art and code are not on opposite ends of the spectrum; code can be used in art, and can itself be a form of art.

Disclaimer: Not all of my code is pretty or "right," unfortunately.

Rafter answered 2/1, 2009 at 13:14 Comment(2)
Definitely agree! Making beautiful applications requires beautiful code.Finnougric
Only just seen this: agreed 100%. Ugly code is far more likely to be buggy. An appreciation of elegance and beauty is essential to good coding.Holbrook
D
21

Don't comment your code

Comments are not code, and therefore when things change it's very easy to forget to update the comment that explained the code. Instead I prefer to refactor the crap out of the code to the point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or explaining why

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper
Deva answered 2/1, 2009 at 13:14 Comment(10)
Your "explaining why" rationale is also subject to change if the API you are working with, for example, gets updated or improved.Unguiculate
In my small example I'm trying to show why I already did what I did. Like there's a better way to grab data, but this is the only way right now. Kind of like a note to refactor or why something happened. Also it's mainly related to my own code and not an external dependency.Deva
Icky. Don't declare a variable if you're only going to use it once. Your suggestion is not much better than, "int i,this_is_a_counter;". If you're forced to add extra code to get rid of comments, you've made things MORE complicated!Glidebomb
I have to agree with Brian, nothing worse than having a bunch of one-time-use variables.Liaison
I'm sick of reading this crap. The reality is that the large majority of code out there is badly written, let alone reasonably refactored. If you can't write decent (understandable) code at least have the decency of adding comments.Premer
Why are one-time variables bad? They explain what you do, they don't cost anything (if you have a half decent compiler), and you can easily use them again for the same thing. Without the firstTimeOnPage, I would be very likely to put in the if (data == null) condition somewhere else as well.Colorimeter
-1: Comments are good. Comments are a cornerstone of code. I'd rather spend 10 seconds reading a one-line comment than spend two hours trying to figure out what some really complex code does.Thrifty
You might spend 10 seconds reading a one-line comment and then 3 hours finding out that the comment is outdated and led you down the wrong path. A well named variable or method is preferable, then I know what your intentions were and know that it hasn't changed. Also easily refactorable.Deva
@brian, one time variables can give names to faceless expressions, which is nice, especially in long parameter lists.Eward
@rball: I agree and disagree, depending on how declarative or domain-specific the language is. You have a functional spec somewhere, if only in your head. If the language is declarative enough to directly encode the functional spec, then there's no need for comments. Usually, that is not the case, so IMO the purpose of comments is to express the mapping between implementation and functional spec, to the extent that the code itself is not able to. That way, when the spec changes, as it always does, you know what code to change.Offside
H
21

It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.

Heinie answered 2/1, 2009 at 13:14 Comment(2)
I agree, with the caveat (and I'm turning and looking in the direction of several teams in Redmond, Washington) that Mort is often unfairly scoped and not always well understood.Ulster
I'm with you Wayne, though to stay in the industry, I think we all need to go Elvis and Einstein at times. And we need to put in effort outside of work too. I rested on my laurels for a while (got married, moved, had other stuff going on) and I can see tech moving beyond me and now I have to play catch up. Tech is moving too fast for extra effort not to be put in. I'm learning and doing side projects again, and I'm having fun. But I do resent the 14 hour a day folks. They will blossom, wither, and then fade. Balance is the key, but the days of being exclusively a Mort are numbered.Jud
S
20

Boolean variables should be used only for Boolean logic. In all other cases, use enumerations.


Boolean variables are used to store data that can only take on two possible values. The problems that arise from using them are frequently overlooked:

  • Programmers often cannot correctly identify when some piece of data should only have two possible values
  • The people who instruct programmers what to do, such as program managers or whoever writes the specs that programmers follow, often cannot correctly identify this either
  • Even when a piece of data is correctly identified as having only two possible states, that guarantee may not hold in the future.

In these cases, using Boolean variables leads to confusing code that can often be prevented by using enumerations.

Example

Say a programmer is writing software for a car dealership that sells only cars and trucks. The programmer develops a thorough model of the business requirements for his software. Knowing that the only types of vehicles sold are cars and trucks, he correctly identifies that he can use a boolean variable inside a Vehicle class to indicate whether the vehicle is a car or a truck.

class Vehicle {
 bool isTruck;
 ...
}

The software is written so when isTruck is true a vehicle is a truck, and when isTruck is false the vehicle is a car. This is a simple check performed many times throughout the code.

Everything works without trouble, until one day when the car dealership buys another dealership that sells motorcycles as well. The programmer has to update the software so that it works correctly considering the dealership's business has changed. It now needs to identify whether a vehicle is a car, truck, or motorcycle, three possible states.

How should the programmer implement this? isTruck is a boolean variable, so it can hold only two states. He could change it from a boolean to some other type that allows many states, but this would break existing logic and possibly not be backwards compatible. The simplest solution from the programmer's point of view is to add a new variable to represent whether the vehicle is a motorcycle.

class Vehicle {
 bool isTruck;
 bool isMotorcycle;
 ...
}

The code is changed so that when isTruck is true a vehicle is a truck, when isMotorcycle is true a vehicle is a motorcycle, and when they're both false a vehicle is a car.

Problems

There are two big problems with this solution:

  • The programmer wants to express the type of the vehicle, which is one idea, but the solution uses two variables to do so. Someone unfamiliar with the code will have a harder time understanding the semantics of these variables than if the programmer had used just one variable that specifies the type entirely.
  • Solving this motorcycle problem by adding a new boolean doesn't make it any easier for the programmer to deal with such situations that happen in the future. If the dealership starts selling buses, the programmer will have to repeat all these steps over again by adding yet another boolean.

It's not the developer's fault that the business requirements of his software changed, requiring him to revise existing code. But using boolean variables in the first place made his code less flexible and harder to modify to satisfy unknown future requirements (less "future-proof"). When he implemented the changes in the quickest way, the code became harder to read. Using a boolean variable was ultimately a premature optimization.

Solution

Using an enumeration in the first place would have prevented these problems.

enum EVehicleType { Truck, Car }

class Vehicle {
 EVehicleType type;
 ...
}

To accommodate motorcycles in this case, all the programmer has to do is add Motorcycle to EVehicleType, and add new logic to handle the motorcycle cases. No new variables need to be added. Existing logic shouldn't be disrupted. And someone who's unfamiliar with the code can easily understand how the type of the vehicle is stored.
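As a rough sketch of how the calling code might look once the type is an enumeration (the method and fee values below are invented purely for illustration), each switch over the type becomes a single obvious place to extend when a new vehicle type appears:

using System;

enum EVehicleType { Truck, Car, Motorcycle }   // Bus etc. can simply be appended later

class Vehicle
{
    public EVehicleType type;

    public decimal GetRegistrationFee()
    {
        switch (type)
        {
            case EVehicleType.Truck:      return 300m;
            case EVehicleType.Car:        return 150m;
            case EVehicleType.Motorcycle: return 75m;   // the only new case needed
            default: throw new InvalidOperationException("Unknown vehicle type");
        }
    }
}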

Cliff Notes

Don't use a type that can only ever store two different states unless you're absolutely certain two states will always be enough. Use an enumeration if there are any possible conditions in which more than two states will be required in the future, even if a boolean would satisfy existing requirements.

Sensualist answered 2/1, 2009 at 13:14 Comment(6)
I guess this is not very controversial.Saadi
The argument isn't controversial per se, but try writing your code like that and see if your team object. I'd bet 9/10 teams would try and argue you back to booleans.Gorton
Of course, OOP guys in the corner would mutter something along the lines of "class Truck extends/implements Vehicle, class Car extends/implements Vehicle..."Vindicate
I worked on a project that used a collection of booleans to try to distinguish among models of printer. It was ... execrable. Nobody would want to do that after having seen it in action. But here's some controversy for you: In languages which allow it, it's perfectly reasonable to use a bool for one of three values: true, false, and don't know.Hemicellulose
Thanks. Never thought about that. I guess I should give enums a better look.Rumney
Just to make things clear to me. Should I use _Bool isGirl; or enum { boy, girl }; typedef unsigned gender;?Nesmith
C
20

My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into doing all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly OOP", we should be looking at which language features are shown by experiment to actually increase productivity, rather than trying to force languages toward some imagined ideal.

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.

Commensal answered 2/1, 2009 at 13:14 Comment(11)
It sounds like you're mixing programming methodologies and language design philosophies, while also recognizing the damage of zealotry. As a result, your potentially interesting thoughts are cluttered and unclear.Cowpoke
The "Truly XYZ" idiom is usually a case of the "No True Scotsman" fallacy. As far as the rest, have you read xahlee.org/Periodic_dosage_dir/t2/oop.html? Also, this seems very similar to a perlmonks post, have you written on this before?Unguiculate
A language is a user interface that can make a programming methodology easier. An OOP language, therefore, is a language designed to make OOP programming easier, making them closely related subjects. This position was argued better by Apocalisp, elsewhere in this question.Commensal
I've never heard anyone pontificate on the phrase "truly object oriented" in the past 10 years I've been programming. Never. Not even once. Are you actually quoting some obnoxious manager?Horseshit
Anyone who started with Java, or C++, and then tried Lua, or JavaScript, or some other language that doesn't have some arbitrary Java feature. Anyone entrenched in the Java world who has a self-superior view that singletons are a terrible idea. Anyone who's read the GoF book and thought it was the futureCommensal
Almost, IMHO. I think OOP is the ideal way to deal with some aspects of programming, but it's not what it's made out to be: It's not a replacement for every methodology and/or piece of code you ever come across; It's not immune from being taken too far; It's not your master; It's not irreplaceable.Vocal
Do you come from a VB6 background and never embraced OOP?Szechwan
Incorrect. There's nothing wrong with OOP, it's just a strategy. What the problem is, is the attitude that I should have "embraced" it, or the only alternative is I'm some backwards beginner. It is not the end all be all, it is not a religion, and I don't have to be crucified in order to expunge me from the pool of programmers so that all "right" thinking programmers can live free of sin. I posted my answer to this question because it is the most controversial opinion I have. That was the question.Commensal
The reason it's the worst thing to happen to programming is that it prevents programmers from looking at other solutions that may actually be better suited to the problem, and it prevents us from looking at or accepting new paradigms that might be better suited to most problems.Commensal
I hate it when newcomers lecture me about the greatness of OOP when I have been programming in OO languages since the mid '80s. They are totally blind to OOP's shortcomings, they don't know that "OOP" is an ill-defined concept and, worst of all, they ignore a whole world of options w.r.t. programming paradigms.Cnidoblast
+1 Wish I could upvote more. This field is rife with bandwagons, gurus, "right thinking", and occasionally good ideas made into religions. To a mechanical/electrical engineer (like me) this is so weird. I assume if something is true there's a scientific reason why. I also assume inventiveness is a good thing. Very little of that in this field.Offside
W
20

I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of controls and adds a large blob of encoded data to the start of every web page. The more controls a page has, the larger the ViewState data becomes. Most people turn a blind eye to it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on every ASP control that doesn't need it. It's either that or have custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there's probably better ways of doing it.
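For what it's worth, here is a minimal sketch of the opt-out that's currently required. The control names are invented; the same thing can be set declaratively with EnableViewState="false" on the control or page-wide in the @Page directive:

// Code-behind sketch: turning ViewState off control by control, because it is on by default.
protected void Page_Init(object sender, EventArgs e)
{
    lblTitle.EnableViewState = false;      // static label, never needs state
    rptResults.EnableViewState = false;    // repeater is re-bound on every request anyway
}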

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)

Wideopen answered 2/1, 2009 at 13:14 Comment(11)
Could you highlight your controversial opinion... is it "viewstate is bad" or something else?Woolgrower
Nope, it's "ViewState is enabled by default, when I really don't think it should be, but having it disabled by default required custom controls"Wideopen
I expect anyone who has worked on ASP.NET would agree with this. We have a page to search a third party system that has some LARGE drop down lists on it. The ViewState doubled the already 200Kb page size.Ornate
I don't think that experienced webforms developers will find this particularly controversial...most of us will agree with you!Teeming
Yup, we encounter the page size doubling from time to time, and sometimes even more. The page renders slower, more bandwidth is used, and it's a nightmare to track down problems when you're viewing the rendered page source.Wideopen
The interesting thing about this is that in the majority of cases ViewState is not needed at all!Supranatural
Don't throw so much crap on a page if Viewstate is really a problem. You probably have a design problem if you really have that much viewstate stuff on a page.Piperpiperaceous
Have you tried programming without ViewState? I can promise you that 5 minutes with JSP will make you run back to ViewState. Seriously, the ViewState is NEVER the problem, the problem is the developer using the ViewState!Nothing
@Paul, I insanely agree! Don't throw so much crap in your page if you're having ViewState problems - go back to design!Nothing
Try ASP.NET MVC, it's a joy to program with.Alterant
You do not have to turn ViewState off for each and every control. You can do it in the @Page directive.Ess
H
19

You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
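As a small sketch of what that separation might look like (the interface, class names, and one-item-per-line file format here are all invented for illustration), the business logic only ever sees the interface, so a database-backed implementation can be dropped in later:

using System.Collections.Generic;
using System.IO;
using System.Linq;

public interface IThingStore
{
    IList<string> LoadAll();
    void SaveAll(IList<string> things);
}

// Flat-file implementation: one "thing" per line, plenty for a few thousand items.
public class FlatFileThingStore : IThingStore
{
    private readonly string path;

    public FlatFileThingStore(string path) { this.path = path; }

    public IList<string> LoadAll()
    {
        return File.Exists(path) ? File.ReadAllLines(path).ToList() : new List<string>();
    }

    public void SaveAll(IList<string> things)
    {
        File.WriteAllLines(path, things.ToArray());
    }
}

// Later, a DatabaseThingStore : IThingStore can replace this without touching the business logic.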

--
bmb

Henson answered 2/1, 2009 at 13:14 Comment(9)
True, but SQLite is very portable too. I'm not gonna start with flat files if there is a chance it should be moved to SQLite.Riancho
There are other benefits of a DB. Shared access across a network for a client/server program. Easy access and manipulation of data (although technologies like LINQ help with that).Mindymine
There are thousands of benefits of a database and reasons why we need them most of the time. But not always.Henson
having a database from the start is easier than first having proper separation between data storage and business logic with flat files so that you can switch to a database later :)Warehouse
Are you saying it's easier to do it wrong with a database than it is to do it right without one?Henson
I am 100% convinced that developers over use databases. The crutch that kills.Warlord
@Stu Thompson, I'm not. At work I'm refactoring an application so that it stores its data in a database instead of xml files. It is a lot of work and I hope it is the last time that I have to do this.Riancho
tuinstoel, don't blame XML files for a missing or poorly designed data access layer.Henson
@bmb, Even refactoring 'just' a data access layer can be a lot of work. And it is totally unnecessary work.Riancho
M
19

C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that, it forces students to think about memory and all the low level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#

Maley answered 2/1, 2009 at 13:14 Comment(10)
so everybody should suffer, because you have suffered? it's always nice to learn useless things, but come on.Miscegenation
Not really, I really loved C++ back in the day, I was in denial when I heard from a prof that it's the worst language he's ever seen.Warehouse
+1: Everyone should learn C first because programming isn't for everyone and it isn't for anyone that can't grasp C.Reveal
Blast them with raw machine code. Suffer!!! The assembler course was the most fun in had (during class time) in university.Coquillage
Mythology. Before encountering C I learned the assembly of 2/3 CPUs and familiarized with others. Some CPUs are a pleasure to program because of their orthogonal instruction sets, others are a pain but less idiosyncratic than C. C fails for its intended use, i.e. a portable assembly.Cnidoblast
.. and I find pathetic the elitism that too many programmers show.Cnidoblast
My university taught programming almost exclusively in Java. I felt simultaneously aroused and cheated when I finally got around to learning C and C++.Vicegerent
I disagree. It's hard to get first-timers excited about memory allocation.. Start with a language where you can get near instant gratification. The web languages are good for this.Finnougric
@Matt: you're not supposed to agree ;)Warehouse
I did a lot of teaching introductory CS. What I found was most useful was first a few weeks on a decimal machine simulator, to set up the basic mental framework of addresses, memory, instructions, and stepwise execution. Then we did Basic (sorry), then Pascal. I like C (and C++) but those are hell to teach to newbies, because there are too many subtle ways for students to get confused, like the difference between pointers and array referencing, and nested types. It's not acceptable to say "sink or swim" - they pay tuition.Offside
A
17

All source code and comments should be written in English

Writing source code and/or comments in languages other than English makes it less reusable and more difficult to debug if you don't understand the language they are written in.

Same goes for SQL tables, views, and columns, especially when abbreviations are used. If they aren't abbreviated, I might be able to translate the table/column name on-line, but if they're abbreviated all I can do is SELECT and try to decipher the results.

Aguiar answered 2/1, 2009 at 13:14 Comment(8)
If English is the main language of wherever you work, I guess. Otherwise, that's just stupid. This suggestion seems pointless imo.Chapeau
Especially when you code ABAP in SAP-Systems it's always funny to read some German comments, that nobody out of German speaking regions will ever understand. (I'm a native German speaker so it's double funny)Astronomy
All comments in English is great - if you speak English, and the maintainers will as well. I am a native English speaker, but occasionally plop other languages in just because I can. If I were coding for an app that would be used and eventually maintained in, say, France - I'd expect the comments to be in FrenchSubastral
Using multiple languages in code makes it harder to read as you have to switch between the two languages in your head. English only (with native terms if needed in parenthesis).Eward
That's not controversial, it's simply idiotic when you know that a piece of code will never leave a non-English-speaking country. I know perfectly well that my English sucks and I don't want to inflict it on my fellow countrymen programmers. Of course, if I'm quoting documentation in English I don't translate it.Cnidoblast
This only makes sense for open source application where you expect (or hope) to get a number of people from all over the place to help. Otherwise just use what ever language suits you best.Warehouse
You guys may not intend for your code to leave your country, but none of us can see the future. Our ERP system is written half in Dutch and half in English because a Dutch company bought an American company and rolled two different products into one. How can I be expected to know what gbkmut means?Aguiar
If you work for a customer and that customer has a set of terms he is using in his trade, then your code will be using them. If that customer is not using English, you will end up translating them to and from English. This will probably end up causing bugs and misunderstandings. Yes it sucks adding Norwegian (in my case) domain names in the code and it makes your head spin for a while, but at least you are on the same page as your customer.Beccafico
G
17

Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.

Gwenni answered 2/1, 2009 at 13:14 Comment(6)
"has never been tested" You do pre-release testing with assertions on and accept the assertion being triggered as passing the test? Weird idea. If you do that than I agree with you but I don't understand why you are doing this.Gerge
No, I'm just assuming that a failed assertion during testing causes a build to be rejected. Therefore, if one happens in the wild, the program has necessarily entered a state outside of test coverage.Gwenni
If during testing your assertions never failed and one does fail in production, there is a problem with testing, but nevertheless the error should be logged and the application should end. Assertions, or code that accomplishes the same thing, should be in production. I agree.Dalury
The problem is when the action of doing the assertion costs something that would otherwise slow down your code. If it is not in a hot path, I totally agree, the asserts should always be on.Tabshey
++ I've followed this path, in the spirit of "hope for the best - plan for the worst". We test to the very best of our ability, but never assume we have found every possible problem. Assert (or throwing an exception) is a way of guarding against doing further damage if a problem occurs (heaven forbid).Offside
It depends. Software that controls pacemakers or nuclear power stations should not be written like that.Candide
G
16

Newer languages and managed code do not make a bad programmer better.

Gib answered 2/1, 2009 at 13:14 Comment(1)
Agree. New running shoes won't make the average runner run any faster.Premer
R
16

The word 'evil' is an abused and overused word on Stack Overflow and similar forums.

People who use it have too little imagination.

Riancho answered 2/1, 2009 at 13:14 Comment(3)
I think this is an evil opinion by an evil man out to do evil.Premer
In other words: 'evil' is evil.Terricolous
"People who use it have too little imagination." ..and are evil. :)Isallobar
P
16

Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.

Piperpiperaceous answered 2/1, 2009 at 13:14 Comment(1)
If you're writing abstraction using spaghetti code, then you're doing something very, very, wrong.Khoury
H
15

I generally hold pretty controversial, strong and loud opinions, so here's just a couple of them:

"Because we're a Microsoft outfit/partner/specialist" is never a valid argument.

The company I'm working in now identifies itself, first and foremost, as a Microsoft specialist. So the aforementioned argument gets thrown around quite a bit, and I've yet to see a context where it's valid.

I can't see why it's a reason to promote Microsoft's technology and products in every applicable corner, overriding customer and employee satisfaction, and general pragmatics.

This just a cornerstone of my deep hatred towards politics in software business.

MOSS (Microsoft Office Sharepoint Server) is a piece of shit.

Kinda echoes the first opinion, but I honestly think MOSS, as it is, should be shot out of the market. It costs gazillions to license and set up, pukes on web standards and makes developers generally pretty unhappy. I have yet to see a MOSS project that has an overall positive outcome.

Yet time after time, a customer approaches us and asks for a MOSS solution.

Haworth answered 2/1, 2009 at 13:14 Comment(3)
MOSS = Microsoft Office SharePoint Server ?Riancho
As someone who occasionally has to program for SharePoint, I will state that your second opinion is not controversial at all.Crowther
I agree 250% with everything. Keep speaking your mind. Lots of people see things this way!Schaeffer
U
15

The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.

Unlimited answered 2/1, 2009 at 13:14 Comment(2)
It reminds me of a quote (I can't remember who said it though) - "Measuring a program by lines of code is like measuring a plane by weight."Whilst
@Cristián: It was Bill Gates who said that.Hollis
C
15

Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.

Cowpoke answered 2/1, 2009 at 13:14 Comment(17)
You must have a really large screen then. Do you also think that a class can have no more than 3 or 4 methods, because no more than that clearly fits in the 41 lines that fit on my screen? Voting up, because this is really controversial.Constipate
Rene: thanks for disagreeing with me without dismissing my answer out of hand. I sense an open mind.Cowpoke
I have to disagree as well. I write a lot of Python classes and not many of them fit on my screen. Of course, I'm not counting my netbook's screen because that would just be unfair to me. =PNumbersnumbfish
Screen size varies widely depending on your visual acuity. I keep my screens running at 1680×1250, and use Consolas 8pt. What I can see on one screen is likely much more than a guy running at 640×480 using Courier New 10pt.Roo
Make that, "Screen capacity varies widely depending on your visual acuity and display settings." :-) Not enough coffee yet. :-)Roo
@Mike: it's true, screen capacity varies. To follow my guideline, you have to decide which screen you want to fit on. On a team, you have to make that decision together. Still, the principle is sound: I want to be able to look at a whole class and comprehend it in its entirety, without scrolling.Cowpoke
This might be quite challenging to implement in some languages that are more verbose (require more plumbing), but I admire the general sentiment.Timeserver
@Rob: thanks, and you're right. In some languages you can Extract Class and get some compactness, hopefully for the benefit of your code. In others (C++ I'm looking at you!) even simple classes have to work very hard to function.Cowpoke
Do you have any other rules to go with this? The list of classes in an API should fit on one screen? What is it in the class that you need to see anyway? Surely the name tells you all about what it can do! What need is there to look at the list of methods?Wage
Some other rules that may fit: "Methods should have one statement" and "blocks should have only one statement" and "switch cases must be trivial" and "each 'enum' type should be mentioned in a conditional only once". :-)Cowpoke
Ouch. It can be hard enough to make a method fit on the screen, never mind an entire class (my main language is Java BTW)Schizophrenia
For some of my classes, I can barely fit the member list on the screen. If an object is to represent something, it should do so in its entirety. Breaking it up into many smaller classes is just adding visual complexity (right click > go to definition - ad nauseam) where it need not exist.Hushaby
@SnOrfus: I bet that there are bits of self-contained, general-purpose, reusable bits of functionality in those big classes, that would make COMPLETE SENSE as a new class. You wouldn't be confused when looking at a reference to one, because the name and its functionality would be obvious.Cowpoke
I think this is baiting. The implication is that a class should have a limit to the number of attributes it can have because their declaration eats into the space for method bodies. This sounds like a language troll as in, any language that can't fit a class onto one screen isn't fit to use. Try coding something complex like the contact details for a person which includes an international address including phone numbers, email, fax, etc.Goodrum
r u talking abt classes in c++ where function body is declared outside the class? then may be u r right...Tintinnabulation
@Tintinnabulation No, that's not what I'm talking about. It's not possible to do this in C++ because the language is too complex and unwieldy. Also, I wish you would write Englis.Cowpoke
Not if you're programming for a mobile phone.Terricolous
G
15

I really dislike when people tell me to use getters and setters instead of making the variable public when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable of an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is both easier to read and prettier than set/get in the same example.

I don't see the reason for writing:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, more time when you do it over and over again, and just makes the code harder to read.
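In C#, for comparison, the call-site syntax stays a.x = something whether x is a public field or a property, so the readable form survives either way and checking can be added later (callers need a recompile but no source changes). A tiny sketch, with the class invented for illustration:

class Point
{
    public int X { get; set; }   // auto-implemented property: no hand-written getter/setter bodies
}

// Usage reads exactly like a public field:
Point a = new Point();
a.X = 5;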

Guardi answered 2/1, 2009 at 13:14 Comment(19)
Agreed. Getters and setters violate encapsulation just as much as exposing objects directly does. There is no real point to them (except maybe in an external interface).Castano
There's actually a good reason to use setters: You can do some checking on constraints before assigning the new value to your variable. Even if your current code doesn't require it, it will be much easier to add such checks when there's a setter.Soren
I was very glad there was a setter on a variable once when I had to make sure some processing was done when it changed.Rosebay
Actually, I think Ruby has something that gets you both - it's called virtual attributes. It allows you to have checks on your assignments and still be able to access the data as if it were a public member.Whilst
Python lets you do that as well.Numbersnumbfish
Setters allow you to add contention in multithreading environments. Just lock when you set. Of course, it is not always the case that your code will end up being accessed by multiple threads, or is it?Dalury
But this being 2009, who's still using an IDE that does not create the getters and setters on the press of a key...?Garton
It's not just that I have to write the code, but the getters and setters obfuscate the code itself by, in 95% of the time in my applications, taking up space and just being plain ugly.Guardi
I guess C# gives you an easy way to have both, is this Java?Deva
I had / have this opinion in some cases, but, one VERY important fact for me is that you can't 'override' a public variable. If the class in question is final, sealed, whatever - cool... AND if you're basically saying extenders should never be able to do anything on set / get ... ever ...Ulster
In many languages you can change a public field to a property without requiring any changes to code that consumes it. You would, however, force a recompile (in non-interpreted languages at least), which adds some constraints if you're shipping opaque libraries to external customers.Whitted
And you set a breakpoint on a public field how, exactly? Setters are brilliant for exactly this reason - you can easily see what code is influencing a value.Fir
You must use getters and setters when you code to an interface!Eward
1. Use an editor that shortens the process 2. Using setters and getters is much safer than directly accessing the variable: what if you write a class with a variable inside: counter, and incorporate it into code (maybe in 100 classes) and now suddenly decide that counter cannot be negative? Using a setter can help solve problems like these... 3. Sometimes exposing variables can be dangerous; eg: Exposing TOS in a stack classNanosecond
@Richard Berg In VB6 you could change a public field to a property and vice versa without requiring any changes to code that consumes it, not even a recompile. It's one of the few areas where VB6 was IMHO better than .NetCandide
@Thorbjørn -- not necessarily. Just because the designers of C#/Java decided to disallow fields in interfaces doesn't make it an inherently bad idea. Direct access is the dominant idiom in languages as diverse as C and Ruby.Whitted
@Fir -- set a data breakpoint. Your CPU has hardware interrupts for this exact purpose. Getting it to work in a managed language is a little challenging, but not any harder than the problems inherent to soft-mode debugging generally.Whitted
@Richard Berg: I don't get you - direct access is a dominant idiom for C, but definitely not for Ruby - actually, without reflection, there is no way in Ruby to do direct access. What Ruby does is give you an extremely easy way (attr_accessor :x) to generate getters/setters for an attribute which are syntactically transparent; i.e. you'd still use p.x and p.x = 3 instead of p.getX() and p.setX(3), but they're still methods. "Direct" instance variable would be @x, and you can't use a dot notation with it (i.e. p.@x is ungrammatical).Valuator
I agree. I think your point is valid as long as we are not talking about exposed interfaces (where obviously you want to provide as much encapsulation as possible). For internal code I prefer to use setters/getters when there is actual checking of constraints before setting/getting anything. I don't like setters/getters that do nothing because you have to browse the code to actually see if they do something...Coady
M
14

A Developer should never test their own software

Development and testing are two diametrically opposed disciplines. Development is all about construction, and testing is all about demolition. Effective testing requires a specific mindset and approach where you are trying to uncover developer mistakes, find holes in their assumptions, and flaws in their logic. Most people, myself included, are simply unable to place themselves and their own code under such scrutiny and still remain objective.

Maggy answered 2/1, 2009 at 13:14 Comment(3)
Do you include unit testing in that? Do you not see any value in unit testing? If so, I don't agree. I would agree that a developer shouldn't be the only tester of their software (where possible, of course).Derr
Jon, I am talking from the point of view that yes they SHOULD do unit testing but no they should NOT be the only tester of their code. As you rightly point out, if they are the only one then they don't have much choice. This question did ask for your most controversial opinion so I think that mine is right up there. The other key point is that the "we don't need no stinking testers" attitude, because the devs or anyone else can just do it, is completely wrong as wellMaggy
I suggest you reword the rule to "should never be RESPONSIBLE for testing their own software", as your current wording could imply you were not allowed to test your programs at all.Eward
D
13

Object Oriented Programming is overused

Sometimes the best answer is the simple answer.

Designed answered 2/1, 2009 at 13:14 Comment(1)
For most competent worldly-wise OO devs, classes are only broken out from a root class once it becomes apparent that complexity is becoming hard to manage. Oddly (or not so oddly), it is often at that very point that it becomes apparent just what needs to be broken out. And until you do break out from a root class, you are programming procedurally (at least within the context of that class). Premature proliferation of classes during the development process is something that OO greenhorns do.Klein
E
13

If you want to write good software then step away from your computer

Go and hang out with the end users and the people who want and need the software. Only from them will you understand what your software needs to accomplish and how it needs to do that.

  • Ask them what they love & hate about the existing processes.
  • Ask them about the future of their processes, where it is headed.
  • Hang out and see what they use now and figure out their usage patterns. You need to meet and match their usage expectations. See what else they use a lot, particularly if they like it and can use it efficiently. Match that.

The end user doesn't give a rat's how elegant your code is or what language it's in. If it works for them and they like using it, you win. If it doesn't make their lives easier and better - they hate it, you lose.

Walk a mile in their shoes - then go write your code.

Epicurean answered 2/1, 2009 at 13:14 Comment(3)
Great answer - I always try to do this... but sometimes you got to protect users from their own ideas. Because e.g. in business software (financial) I always encounter some users with the tendency to wish for "creative bookkeeping". Hehe.Astronomy
This is why I love being a domain expert. For my whole career I've worked alongside people who use the type of software I write.Hillie
@Jeanne: Ditto - my major project is based on what I do for a living. I do a lot of talking to myself.Epicurean
S
13

Programming is in its infancy.

Even though programming languages and methodologies have been evolving very quickly for years now, we still have a long way to go. The signs are clear:

  1. Language Documentation is spread haphazardly across the internet (stackoverflow is helping here).

  2. Languages cannot evolve syntactically without breaking prior versions.

  3. Debugging is still often done with printf.

  4. Language libraries or other forms of large scale code reuse are still pretty rare.

Clearly all of these are improving, but it would be nice if we all could agree that this is the beginning and not the end=).

Slily answered 2/1, 2009 at 13:14 Comment(3)
I have upvoted it although I believe this is completely uncontroversial to anyone who knows a minimum about programming methodology and history. We've got a long road ahead, hence the many insulting jokes about programmers’ abilities compared to architects, airplane pilots etc.Literatim
Actually there are many who would say the opposite. Everything interesting to do with programming languages was done in the 60s with Lisp. We are just waiting for people to figure this out - witness the growing popularity of Python/Java closures, etc. So this is controversial.Tabshey
printf debugging is actually mentioned on a higher-rated comment in this thread as being a good ideaBeersheba
B
13

I have a few... there are exceptions to everything so these are not hard and fast, but they do apply in most cases

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow Web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of Web surfers are using IE or Firefox, and since both of those browsers are forgiving of nonstandard, nonstrict, invalid HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.

Berserk answered 2/1, 2009 at 13:14 Comment(5)
I wish I could vote to make you God. Really, this is amazing stuff.Coquillage
On the other hand the best OSS packages are huge force multipliers. These are the well-designed, well-maintained ones that have big communities of users and developers (and real published books). Some examples of these are Rhino (Javascript interpreter), Xerces (XML Parser), Restlet (REST Web Services), and jQuery (Javascript GUI development). Others really do suck, like Axis 1.x.Wilkie
Screen readers and other accessibility tools perform better if the HTML conforms to standards. As for OSS .. your reasoning is deeply flawed in applying your own negative experience to all OSS works. Sure modifying OSS projects can be difficult (impossible for many) but I've lost count of the OSS libraries I've used to save myself tons of work on various projects. If most OSS is useless it is only because there is so much of it. There is a lot of very useful OSS out there.Daynadays
Everything WWW sucks anyway, so for the first point I cannot care less. +100 for the second.Cnidoblast
long live the sudo apt-get installWarehouse
C
13

Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.

Cicatrize answered 2/1, 2009 at 13:14 Comment(1)
And therefore the bozo bit must be flipped for some :-DHodgkinson
E
13

New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.

Embosom answered 2/1, 2009 at 13:14 Comment(4)
Did you mean "New web projects should not consider" ?Unguiculate
That doesn't sound very controversial to me.Schizophrenia
WOW! Some people in this thread really have extremist views! ;-)Premer
This is absolutely not controversial. Perhaps you want to say New web projects should not consider using JavaTotalitarianism
F
13

Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


@ocdecio: Fabian Pascal gives (in chapter 3 of his book Practical issues in database management, cited in point 3 at the page that you link) as one of the criteria for choosing a key that of stability (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint in comments.

You don't know what he wrote and you have not bothered to check, otherwise you could discover that you actually agree with him. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think, use your brain instead of a dogmatic/cookbook/words-of-guru approach".

Fodder answered 2/1, 2009 at 13:14 Comment(8)
Yes! His ideas about hierarchical data structures are academically elegant and totally useless.Drought
Well, I like Celko but I agree with you re: surrogate primary keys!Teeming
Agree in part, surrogate keys are definitely more convenient when accessing data, but I try to identify a natural key as well and usually set it up as a constraint. So why not both?!Tropic
I have no problems with natural keys to be used for convenience, but primary keys should be immutable. I once had a system that used SSN's as PK's, and sometimes persons wouldn't have one (as children) and then they would. Try to change a PK, what a mess...Mast
I can agree with the concept that once your autonumber keys get mismatched, there's no way to fix them. But the solution isn't "natural" keys; the solution is never to expose the keys to your users.Geronto
I wish I could go back a few years on my current project and tell myself not to use a natural key. Now we're stuck with it and kludging around it. +1Skyward
@ocdecio: Fabian Pascal gives (in chapter 3 of his book, as cited in point 3 at the page that you link) as one of the criteria for choosing a key that of stability (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint. So you actually agree with him, but think otherwise. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, think, use your brain instead of a dogmatic/cookbook/words-of-guru approach".Cnidoblast
One of the classic mistakes is to assume that just because a candidate natural key, such as SSN, is by definition unique, that you will receive unique values. People may lie or make mistakes and you then have a chance of collision when the "real person" comes along.Stay
V
12

Software is like toilet paper. The less you spend on it, the bigger of a pain in the ass it is.

That is to say, outsourcing is rarely a good idea.

I've always figured this to be true, but I never really knew the extent of it until recently. I have been "maintaining" (read: "fixing") some off-shored code recently, and it is a huge mess. It is easily costing our company more than the difference had it been developed in-house.

People outside your business will inherently know less about your business model, and therefore will not do as good a job programming any system that works within your business. Also, they know they won't have to support it, so there's no incentive to do anything other than half-ass it.

Vicegerent answered 2/1, 2009 at 13:14 Comment(3)
@ iandisme - Probably you didn't spare some time to tell those guys what your business is? Another point, why did you sign such a contract where they just develop some sh** and flee? You should have done a long-term contract with dev, maintenance and support clubbed together. As a customer, controlling quality was in your hands.Delossantos
@ Seventh Element - Don't blame India because somebody else didn't manage his offshoring project and quality properly.Delossantos
@Delossantos - I didn't have anything to do with setting up the contract. Either way, your point adds to my original statement: Doing it right the first time would have been more expensive up-front, but worth it.Vicegerent
M
12

Greater-than operators (>, >=) should be deprecated

I tried coding with a preference for less-than over greater-than for a while and it stuck! I don't want to go back, and indeed I feel that everyone should do it my way in this case.

Consider common mathematical 'range' notation: 0 <= i < 10

That's easy to approximate in code now and you get used to seeing the idiom where the variable is repeated in the middle joined by &&:

if (0 <= i && i < 10)
    return true;
else
    return false;

Once you get used to that pattern, you'll never look at silliness like

if ( ! (i < 0 || i >= 10))
    return true;

the same way again.

Long sequences of relations become a bit easier to work with because the operands tend towards nondecreasing order.

Furthermore, a preference for operator< is enshrined in the C++ standards. In some cases equivalence is even defined in terms of it (as !(a<b || b<a)).
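One way to make the idiom stick is to wrap the half-open range test once, so the operands always read left-to-right in nondecreasing order (a small C#-style sketch; the helper name is made up):

// Mirrors the mathematical notation lowInclusive <= value < highExclusive.
static bool InRange(int lowInclusive, int value, int highExclusive)
{
    return lowInclusive <= value && value < highExclusive;
}

// Usage: the 0 <= i < 10 check from above becomes
if (InRange(0, i, 10))
    return true;
else
    return false;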

Minica answered 2/1, 2009 at 13:14 Comment(7)
Ick, no. If I want code to throw an exception when a string is over a certain length (for example) I'd far rather use if (text.Length > 30) { throw new ... } than if (!(text.Length <= 30)) { throw new ... }.Derr
if (30 < text.Length) throw .... is another option Actually, I prefer (!(text.Length <= 30)) because it nicely matches assert(text.Length <= 30). Think about when multiple conditions get compounded. Keeping the error checking logic in that 'positive assertion' sense helps reduce logic bugs. I know it looks a little strange the first time. It's controversial and I don't push it on others. But try it with an open mind and you might grow to like it better. Or you might not. :-)Minica
to be pedantic, if(text.Length > 30) is equivalent to if(30 <= text.Length) because the comparison goes from exclusive to inclusiveSubastral
s/is equivalent/is not equivalent/ is I think what you meant. In any case, I never said those two were or were not equivalent.Minica
Why not just return your if-condition?Chufa
I would if that was really what was needed. Perhaps my example was a bit too trivial. Imagine something more interesting and useful in the if/else bodies.Minica
It's language dependent, but in C++ 3 > getAirplane() throws a compiler error, but getAirplane() < 3 might not depending on which constructors are defined for your Airplane class.Bruise
N
12

A programming task is only fun while it's impossible, that is, up until the point where you've convinced yourself you'll be able to solve it successfully.

This, I suppose, is why so many of my projects end up halfway finished in a folder called "to_be_continued".

Novia answered 2/1, 2009 at 13:14 Comment(0)
F
12

Most Programmers are Useless at Programming

(You did say 'controversial')

I was sat in my office at home pondering some programming problem and I ended up looking at my copy of 'Complete Spectrum ROM Disassembly' on my bookshelf and thinking:

"How many programmers today could write the code used in the Spectrum's ROM?"

The Spectrum, for those unfamiliar with it, had a Basic programming language that could do simple 2D graphics (lines, curves), file IO of a sort, and floating point calculations including transcendental functions, all in 16K of Z80 code (a sub-5 MHz 8-bit processor with no FPU or integer multiply). Most graduates today would have trouble writing a 'Hello World' program that was that small.

I think the problem is that the absolute number of programmers that could do that has hardly changed but as a percentage it is quickly approaching zero. Which means that the quality of code being written is decreasing as more sub-par programmers enter the field.

Where I'm currently working, there are seven programmers including myself. Of these, I'm the only one who keeps up to date by reading blogs, books, this site, etc. and doing programming 'for fun' at home (my wife is constantly amazed by this). There's one other programmer who is keen to write well-structured code (interestingly, he did a lot of work using Delphi) and to refactor poor code. The rest are, well, not great. Thinking about it, you could describe them as 'brute force' programmers - they will force inappropriate solutions until they work after a fashion (e.g. using C# arrays with repeated Array.Resize to dynamically add items instead of using a List).
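To make that last example concrete, here is a quick sketch of the two approaches (the source collection is just a stand-in for whatever is being read):

// The 'brute force' version: growing the array by hand on every insert.
int[] items = new int[0];
foreach (int value in source)
{
    Array.Resize(ref items, items.Length + 1);
    items[items.Length - 1] = value;
}

// The straightforward version: let List<T> manage its own growth.
List<int> itemList = new List<int>();
foreach (int value in source)
{
    itemList.Add(value);
}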

Now, I don't know if the place I'm currently at is typical, although from my previous positions I would say it is. With the benefit of hindsight, I can see common patterns that certainly didn't help any of the projects (lack of peer review of code for one).

So, 5 out of 7 programmers are rubbish.

Skizz

Fouts answered 2/1, 2009 at 13:14 Comment(2)
There are fewer programmers with the skillset to tackle a problem that no longer matters. Now we have higher levels of abstraction that allow the big picture to come together in more loosely coupled, highly OO ways. Its not that I'm not smart enough to write it, its that I can write something betterBunni
BIOS's and hardware drivers probably feature a lot of assembler. Many embedded systems are assembler only (or primitive C compilers if you're lucky). Even with high level OO, how many coders could write the equivalent of a Spectrum basic interpreter.Fouts
S
12

Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.

Sensualist answered 2/1, 2009 at 13:14 Comment(4)
Better non-development staff with management skills than developer staff without management skills.Riancho
So you reckon every company that employs any developers should have a developer as CEO?Schizophrenia
Yes, if you going to manage people with a special skill set it would be helpful if you also had a background in that skill set. Would you hire a CEO with no Management experience?Sensualist
C-level comparisons are weak. More realistic would be "Would you hire an untrained mechanic to manage mechanics?" Well...yes. I'm not saying that non-developers make better managers of developers, or that management & development abilities are mutually exclusive, but rather the ability to manage an employee is significantly more important to the ability to do the employee's work.Warlord
T
12

Most consulting programmers suck and should not be allowed to write production code.

IMHO-Probably about 60% or more

Toulouselautrec answered 2/1, 2009 at 13:14 Comment(4)
That is not controversial; that is fact!Hodgkinson
Most non-consulting programmers are stuck in a rut and live in a company bubble maintaining dinosaur code while never being exposed to anything that challenges their assumptions; except for the occasional outside consultant. How's that for controversial? ;-)Premer
@Diego; true and consultants have an opportunity to become amazing programmers with everything they are exposed to. But in my experience, I've seen too much crap written by hacks who just picked up enough knowledge to make it work, knowing they'd never have to maintain it, and they just don't care.Toulouselautrec
I consulted for many years. There were cases where the company programmers were good but didn't understand how I was doing things, and so were inclined to criticize. Nevertheless, I'm inclined to agree with you - there are half-hearted programmers in contracting positions.Offside
H
12

Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.

Holbrook answered 2/1, 2009 at 13:14 Comment(3)
Programming has a lot in common with cleaning your room. The same principles of organization apply.Tocci
Maybe... rather than dealing with your accounts as bits of paper you abstract them into folders, and encapsulate them in a filing cabinet or box. If you find a way to unit test laundry, let me know!Holbrook
Generally having a plan before building a web site/ desktop app/ house/ nuclear sub is always a good idea! Mapping things out, either with sketches on a pad of paper, a wireframe, visio, work flow, mind map, whatever. And training users...I see this missed by even the most brilliant programmers. User acceptance in the long run determines your app's success. If they don't understand it, no matter what it does or how well it is done, your app will fail.Jud
H
11

Copy/Paste IS the root of all evil.

Homopolar answered 2/1, 2009 at 13:14 Comment(0)
N
11

I don't know if it's really controversial, but how about this: Method and function names are the best kind of commentary your code can have; if you find yourself writing a comment, turn the piece of code you're commenting into a function/method.

Doing this has the pleasant side-effect of forcing you to decompose your program well, avoids having comments that can quickly become out of sync with reality, gives you something you can grep the codebase for, and leaves your code with a fresh lemon odour.
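
A tiny C# before/after sketch of the idea (the discount rule, type names and use of C# 9 records are my own invention for illustration):

    using System;

    record Order(decimal Total);
    record Customer(int YearsActive);

    class DiscountDemo
    {
        // Before: the rule lived inline behind a comment:
        //   // check whether the customer is allowed a discount
        //   if (order.Total > 100m && customer.YearsActive >= 2) { ... }
        //
        // After: the comment becomes a method name, so the call site reads
        // like the comment did and the rule has a name you can grep for.
        static bool QualifiesForDiscount(Order order, Customer customer) =>
            order.Total > 100m && customer.YearsActive >= 2;

        static void Main()
        {
            var order = new Order(Total: 150m);
            var customer = new Customer(YearsActive: 3);

            if (QualifiesForDiscount(order, customer))
                Console.WriteLine("Discount applied.");
        }
    }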

Nyeman answered 2/1, 2009 at 13:14 Comment(2)
This can be taken too far. Often there is a subtle business case for a particular method or implimentation strategy that you cannot convey without several lines of comments.Assumpsit
Quite true, but it's a rule of thumb rather than a hard rule. Indicating subtleties is, after all, what comments are best used for.Nyeman
S
11

Estimates are for me, not for you

Estimates are a useful tool for me, as development line manager, to plan what my team is working on.

They are not a promise of a feature's delivery on a specific date, and they are not a stick for driving the team to work harder.

IMHO if you force developers to commit to estimates you get the safest possible figure.

For instance -

I think a feature will probably take me around 5 days. There's a small chance of an issue that would make it take 30 days.

If the estimates are just for planning then we'll all work to 5 days, and account for the small chance of an issue should it arise.

However - if meeting that estimate is required as a promise of delivery what estimate do you think gets given?

If a developer's bonus or job security depends on meeting an estimate do you think they give their most accurate guess or the one they're most certain they will meet?

This opinion of mine is controversial with other management, and has been interpreted as me trying to worm my way out of having proper targets, or me trying to cover up poor performance. It's a tough sell every time, but one that I've gotten used to making.

Selenodont answered 2/1, 2009 at 13:14 Comment(1)
+1 "do you want the estimate for average case or worst case?" "average case" "then don't treat that estimate as a hard limit" duh!Beersheba
D
11

90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?

Daryl answered 2/1, 2009 at 13:14 Comment(3)
en.wikipedia.org/wiki/Sturgeon's_law applies to everything, even programmers.Mindymine
I agree, unfortunately almost 90% of the bad programmers think they fall in the 10% category of programmers who don't suck.Premer
@Diego awesome way to put it. That's kind of in line with what I said about not having the tools to evaluate ourselves, but much more clear.Daryl
S
11

coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: Number of lines of code written per day is not a linear measurement of a programmer's productivity; a programmer who writes 100 lines in a day is quite likely a better programmer than the one who writes 20, but one who writes 5000 is certainly a bad programmer.

Semiweekly answered 2/1, 2009 at 13:14 Comment(3)
Very much agree with this. Did you see that recent thread where the consensus seemed to be that if you can't touch type at 80wpm you aren't a real programmer? Complete nonsense, although people seem to like that sort of testosterone-driven "productivity".Grosz
@ChrisA: I actually read that thread and came back to write this response. While coding, I like to take time dotting my i's and crossing my t's, so to say.Semiweekly
The typing issue isn't that typing faster allows you to type more code. The issue is that if typing is really a second nature, all of your attention can be on what you are coding rather than on typing. Conversely if you are constantly looking at the keyboard and correcting typos, you are wasting a lot of your attention on typing. Your train of thought is interrupted all the time by the mechanical action of typing. Doesn't mean that you are a bad programmer, but you are certainly not as good as you could be if 30% of your attention is stuck on the keyboard. Programmer, master your tools.Rumney
D
11

This one is mostly web related but...

Use Tables for your web page layouts

If I was developing a gigantic site that needed to squeeze out performance I might think about it, but nothing gives me an easier way to get a consistent look out on the browser than tables. The majority of applications that I develop are for around 100-1000 users and possibly 100 at a time max. The extra bloat of the tables isn't killing my server by any means.

Deva answered 2/1, 2009 at 13:14 Comment(5)
Its not so much about code bloat but more about letting the page degrade gracefully.Fey
And you think div's and css does this? I don't.Deva
I always try to make a layout that avoids tables, and I always fail. Div-based layouts just don't have the flexibility of a table. +1Skyward
Marcus: Are you kidding? Use tables for what they were meant for - tabular data.Hypogeous
I'm starting to believe in using CSS frameworks like blueprint and 960. These seem to be giving me the consistency along with it being a lot easier to make the layout. Seems to be meeting my needs so I'm pretty jazzed.Deva
S
11

The worst thing about recursion is recursion.

Stipulation answered 2/1, 2009 at 13:14 Comment(3)
Before you understand recursion, you must first understand recursion.Szechwan
Recursion, n. See recursion.Rosebay
If that's what you reckon, you should check this out.Torrietorrin
A
11

A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.
Adjudge answered 2/1, 2009 at 13:14 Comment(5)
"Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax." - you just broke many hearts, some people learn new language every year.Miscegenation
And it gets easier and easier, doesn't it?Adjudge
"you finally realize that all programming languages are the same" -- you hear that a lot from people who have only programmed in C#, C++, flavors of VB, Java, and maybe Python. Then you finally learn Haskell, Ocaml, Erlang, Prolog, and Lisp, and you feel like an idiot for having missed so much.Horseshit
It's always nice to have lots of toys, but we know they all serve the same purpose - to entertain us in some way. Likewise with every programming language I've seen over the past forty some odd years. As mentioned above, it's all about algorithm - not syntax.Adjudge
@cookre: try to use algorithms designed to be expressed in an imperative programming language (PL) with a pure lazy functional PL like Haskell or in a (constraint) logic PL like Prolog (and derivatives) or in a PL designed for fault tolerance and massive concurrency, like Erlang and you will discover that semantics differences are all that really counts.Cnidoblast
E
11

Don't use stored procs in your database.

The reasons they were originally good - security, abstraction, single connection - can all be done in your middle tier with ORMs that integrate lots of other advantages.
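
As a hedged sketch only - assuming an Entity Framework Core-style ORM, with invented entity names and a placeholder SQLite connection string - this is the shape of keeping the query in the middle tier instead of in a stored procedure:

    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical entity and context; requires the EF Core packages.
    public class Order
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Total { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders => Set<Order>();

        protected override void OnConfiguring(DbContextOptionsBuilder options) =>
            options.UseSqlite("Data Source=shop.db"); // placeholder connection
    }

    public static class OrderQueries
    {
        // Instead of calling a GetOrdersForCustomer stored procedure, the query
        // lives in application code; the ORM handles parameterisation and
        // translation to the vendor's SQL dialect, and it versions with the
        // rest of the codebase.
        public static decimal TotalSpentBy(ShopContext db, int customerId) =>
            db.Orders.Where(o => o.CustomerId == customerId).Sum(o => o.Total);
    }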

This one is definitely controversial. Every time I bring it up, people tear me apart.

Electrophone answered 2/1, 2009 at 13:14 Comment(3)
I worked on a project that I consider to be an exception to this rule, but it did mean constantly hitting against all the reasons I mostly agree with you. They're not a good solution, 99% of the time.Skyward
SQL is just another language? Tough to reason with that mindset.Aunt
SPROCs eliminate SQL injection attacks. In MSSQL they are pre-compiled (and hence faster). @Christopher, can you give me the address of any websites that you built? I want to make some money :P.Coquillage
G
11

SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL that is written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.

Gantrisin answered 2/1, 2009 at 13:14 Comment(3)
+1 for the abomination. Anything that's harder to read than write has got to be wrong.Grosz
This is a statement that two things that had been around for a long time, and have been heavily used, would be much better done if they'd known then what we know now. That is much closer to being a tautology than a controversy.Rosebay
html layout is a lot easier than assembling widgets in C++Warehouse
O
11

Junior programmers should be assigned to doing object/ module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.

Ovariectomy answered 2/1, 2009 at 13:14 Comment(5)
I will tell you from having dealt with entry-level and junior developers that they learn precisely nothing by performing "maintenance and bug fixes"; they never develop any skills. Letting juniors build something from scratch teaches them an incredible amount in a short period of time.Horseshit
Quite so. Aptitude has very little to do with experience, which often just entrenches bad habits.Grosz
I would say the exact opposite. Let them write implementations of existing interfaces, that must pass existing unit tests. They will pick up some design skills just by working with the senior developer's designs for a few months.Schizophrenia
@Juliet, absolute rubbish. When I was an entry-level developer I did maintenance and bug-fix work and learnt directly why consistency and separation of concerns are so essential in software. Maintaining code with "issues" is THE best way to improve your own designs.Mackler
Nothing teaches you the value of doing things the right way like the pain of doing things the wrong way and then having to live with the results.Ebullient
S
10

Intranet Frameworks like SharePoint make me think the whole corporate world is one giant ostrich with its head in the sand

I'm not only talking about MOSS here, I've worked with some other CORPORATE INTRANET products, and absolutely not one of them is great, but SharePoint (MOSS) is by far the worst.

  • Most of these systems don't easily bridge the gap between Intranet and Internet. So as a remote worker you're forced to VPN in. External customers just don't have the luxury of getting hold of your internal information first hand. Sure this can be fixed at a price $$$.
  • The search capabilities are always pathetic. Lots of the time other departments simply don't know the information is out there.
  • Information fragments, people start boycotting workflows or revert to email
  • SharePoint development is the most painful form of development on the planet. Nothing sucks like SharePoint. I've seen a few developers contemplating quitting IT after working for over a year with MOSS.
  • No matter how the developers hate MOSS, no matter how long the most basic of projects take to roll out, no matter how novice the results look, and no matter how unsearchable and fragmented the content is:

EVERYONE STILL CONTINUES TO USE AND PURCHASE SHAREPOINT, AND MANAGERS STILL TRY VERY HARD TO PRETEND IT'S NOT SATAN'S SPAWN.

Microformats

Using CSS classes originally designed for visual layout - now being assigned for both visual and contextual data - is a hack, with loads of ambiguity. Not saying the functionality should not exist, but fix the damn base language. HTML wasn't hacked to produce XML - instead the XML language emerged. Now we have these eager script kiddies hacking HTML and CSS to do something it wasn't designed to do. That's still fine, but I wish they would keep these things to themselves, and not make a standard out of it. Just to sum up - butchery!

Schaeffer answered 2/1, 2009 at 13:14 Comment(4)
Your programming opinion doesn't look very controversial to me. In fact I can't even see what your programming opinion is.Thermostat
I agree with your attacks on SharePoint. In my dealings with the beast, there is a lot of confusion about what it can and should do. I guess that comes from the office world where people abuse Word, Excel, and Access to do ungodly things that should be handled by programmers creating real applications. The running joke around SharePoint's abilities at my work is that it can "wash your car", or "mow your lawn", or that it has infinite super powers.Brann
I agree that this is not controversial. As a MOSS dev I can only conclude that SP was written by Microsoft's best team of monkeys with down syndrome.Hygiene
What is controversial is that MOSS is considered by most business users to be a perfect all round intranet solution, but honestly its a pile of dog crap under the hood.Schaeffer
R
10

Microsoft should stop supporting anything dealing with Visual Basic.

Retainer answered 2/1, 2009 at 13:14 Comment(5)
I've been saying that since Visual Basic 1.0.Consistence
Microsoft should stop. Period.Paction
Fully agree, why have VB.net? any VB.net developer can convert to C#. I know because I used to be a VB6 developer.Schaeffer
Is that even controversial at all? :DKlein
Visual Beginners All-purpose Symbolic Instruction Language. Damn, Microsoft is still just a beginner, a n00b.Nesmith
P
10

Making software configurable is a bad idea.

Configurable software allows the end-user (or admin etc) to choose too many options, which may not all have been tested together (or rather, if there are more than a very small number, I can guarantee they will not have been tested).

So I think software which has its configuration hard-coded (but not necessarily shunning constants etc) to JUST WORK is a good idea. Run with sensible defaults, and DO NOT ALLOW THEM TO BE CHANGED.

A good example of this is the number of configuration options on Google Chrome - however, this is probably still too many :)

Pyrex answered 2/1, 2009 at 13:14 Comment(2)
Agreed. Make a design decision for the user and stick to it.Beeves
Making software is a bad idea.Nesmith
W
10

Recursion is fun.

Yes, I know it can be an ineffectual use of stack space, and all that jazz. But sometimes a recursive algorithm is just so nice and clean compared to its iterative counterpart. I always get a bit gleeful when I can sneak a recursive function in somewhere.
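
For example (a directory-counting problem of my own choosing, in C#), the recursive version reads almost like the problem statement, while the iterative equivalent has to carry its own explicit stack:

    using System;
    using System.Collections.Generic;
    using System.IO;

    class DirectoryWalk
    {
        // Recursive: the structure of the code mirrors the structure of the data.
        static int CountFilesRecursive(string path)
        {
            int count = Directory.GetFiles(path).Length;
            foreach (string sub in Directory.GetDirectories(path))
                count += CountFilesRecursive(sub);
            return count;
        }

        // Iterative equivalent: same work, but the bookkeeping (an explicit
        // stack of directories still to visit) is now our problem.
        static int CountFilesIterative(string root)
        {
            int count = 0;
            var pending = new Stack<string>();
            pending.Push(root);
            while (pending.Count > 0)
            {
                string path = pending.Pop();
                count += Directory.GetFiles(path).Length;
                foreach (string sub in Directory.GetDirectories(path))
                    pending.Push(sub);
            }
            return count;
        }

        static void Main()
        {
            string root = Directory.GetCurrentDirectory();
            Console.WriteLine(CountFilesRecursive(root));
            Console.WriteLine(CountFilesIterative(root));
        }
    }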

Warlord answered 2/1, 2009 at 13:14 Comment(3)
"Ineffectual use of stack space" -- only in crappy languages. See en.wikipedia.org/wiki/Tail_recursionHorseshit
That's what's great about being a programmer - cheap thrills :-) At least Electrical Engineers get to sniff rosin smoke.Offside
@Juliet: Only crap languages? So all languages that don't have tail recursion are crap? Spare me.Warlord
B
10

Coding is an Art

Some people think coding is an art, and others think coding is a science.

The "science" faction argues that as the target is to obtain the optimal code for a situation, then coding is the science of studying how to obtain this optimal.

The "art" faction argues there are many ways to obtain the optimal code for a situation, the process is full of subjectivity, and that to choose wisely based on your own skills and experience is an art.

Barvick answered 2/1, 2009 at 13:14 Comment(1)
Electronics designers will always tell you that designing electronic circuits is 'an imprecise science'. I think the opposite is true of constructing computer programs - it is an exact art. I think this partly because I don't know where my programming ability comes from. I sit at the keyboard and "it just happens". I'm not following any rules or processes when I write code, therefore it is an art. But whatever I write has to be exactly right, or it will not work. Hence, it is an exact art.Omniscience
T
10

Most developers don't have a clue

Yup .. there you go. I've said it. I find that from all the developers that I personally know .. just a handful are actually good. Just a handful understand that code should be tested ... that the Object Oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway ... I'm glad initiatives like stackoverflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc ...

But nope ... the majority of developers just learn a language that they are required by their job and stick with it until they themselves become old and grumpy developers that have no clue what's going on. All they'll get is a big paycheck since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!

Trulatrull answered 2/1, 2009 at 13:14 Comment(1)
Sometimes I wonder if employers are to blame for not acknowledging, demanding, and rewarding good talent.Compote
S
10

Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly. Exhaustive libraries that are so complicated to use I tear my hair out, and simple, easy-to-use libraries that don't quite do what I need them to do.

Salvage answered 2/1, 2009 at 13:14 Comment(0)
H
10

The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.

Hammons answered 2/1, 2009 at 13:14 Comment(0)
P
10

My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.

Planetesimal answered 2/1, 2009 at 13:14 Comment(0)
B
10

Explicit self in Python's method declarations is a poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with an apparent off-by-one error in the reported number of arguments.

Barbiturate answered 2/1, 2009 at 13:14 Comment(6)
I've certainly forgotten to type "self" many times myself, but what would you have done instead? You can't just imply self in all method declarations because of classmethods and staticmethods.Thrombophlebitis
I often mistype it as slef and I get errors because self is undeclaredWarehouse
I think that def in class should imply self, and other types of methods could use different/additional keyword, like defstatic/static def.Hahnke
It's actually due to an implementation problem early on in the language design -- apparently Guido and team could not figure out how to bind the implicit self parameter to its enclosing environment, short of just passing it explicitly. Hope I got that right, not a compiler/translator guru.Prescriptible
Please read around and reconsider your opinion: effbot.org/pyfaq/… and artima.com/weblogs/viewpost.jsp?thread=214325 are two good places to start.Wooley
@Daz: links you've given talk about either body of a function (but I'm talking about declaration of arguments) or semantics of functions being 1st class (which is completely orthogonal issue to the syntax).Hahnke
P
10

How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.
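
A small C# sketch of the kind of non-memory leak I mean (compilable, but it needs a real database to run; the connection string, the Orders table, and the choice of the Microsoft.Data.SqlClient package are placeholders): the GC will eventually reclaim the object, but only Dispose/using returns the underlying connection to the pool promptly.

    using Microsoft.Data.SqlClient;

    class ConnectionLeakDemo
    {
        const string ConnectionString = "..."; // placeholder

        // Leaky: when this method returns, the SqlConnection becomes garbage,
        // but the underlying connection isn't returned to the pool until a
        // finalizer eventually runs - under load the pool runs dry long before that.
        static int CountOrdersLeaky()
        {
            var connection = new SqlConnection(ConnectionString);
            connection.Open();
            using var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection);
            return (int)command.ExecuteScalar();
        }

        // Deterministic: 'using' disposes the connection (returning it to the
        // pool) as soon as the block exits, whether or not the GC ever runs.
        static int CountOrders()
        {
            using var connection = new SqlConnection(ConnectionString);
            connection.Open();
            using var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection);
            return (int)command.ExecuteScalar();
        }
    }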

Pistoia answered 2/1, 2009 at 13:14 Comment(7)
I've seen 50mb leaked because some library programmer hooked an event and didn't make absolutely sure to unhook it.Heliotherapy
8gb RAM is nothing to a repetitive leak on a server under high load.Vaasta
I guess it refers to RIIA idiom. In that case I must adhere to the proposal. RIIA is a solution for all resources, GC is a partial solution for memory resources only.Dalury
+1 to that. Before GC, programmers took care of leaks before deployment. These days, applications are deployed and then when a 100 users are using the application, we discover that we've run out of database connections.Hammons
Anyone who expects garbage collection to handle all resource management has desperately misunderstood garbage collection. GC is only for managing memoryAhoufe
I'd give a +1 if you had said: "GC because it's not available for all resources; only memory. So you can leak DB connections." GC has solved 100 issues and introduced 20 new ones, so it's still an advantage.Calctufa
Which "100 issues"? It has solved only one - memory management, and IMHO even that poorly.Pistoia
L
10

The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.

Lorie answered 2/1, 2009 at 13:14 Comment(0)
T
10

Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.

Terrijo answered 2/1, 2009 at 13:14 Comment(6)
I concur: you can't version stored procedures, and having 200+ stored procedures in a large project becomes a maintenance nightmare. Embedded SQL is ok for small projects, but I'd rather use an ORM to write my queries for me.Horseshit
Princess: I must disagree with your statement that you can't version stored procedures. I version them myself by keeping the SQL for them in source code control. If you make a change to the database, re-export the script for it and check it into the repository.Roo
I agree about versioning stored procedures. If you are writing SP, you need to take it upon yourself to version them in source control.Lieutenant
Out of your database? There speaks a 1970s DBAGrosz
We can version SPs. The build process moves them from source control into the database.Heliotherapy
In DB2/400 stored procedures are an interface to native code on the system... In other words, hard to move over to the calling system.Eward
A
9

Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes the programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended they taught us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I meant is the overhead imposed by dynamic, interpreted languages makes them suck. And don't come with that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.

Alva answered 2/1, 2009 at 13:14 Comment(6)
C may be fast to execute, but dynamic, interpreted languages are faster to develop in. I think you're being a little close-minded here.Thrombophlebitis
C is NOT the right tool for everything! it's not the tool for web development! there's that at least!Warehouse
What are dynamic, interpreted languages good for, besides Web development? Note, I happen to hate Web apps.Alva
Sure, dynamic languages should be burned. From now on I shall always compile my shell scripts to machine code.Constipate
Dynamic languages are good for different jobs. They tend to be ideal for quick and dirty throw away scripts for admin stuff, as well they tend to be better geared for applications that require a lot of string manipulation and need to be developed quickly.Characharabanc
That's 3 opinions in one answer, and they're all dupesSchizophrenia
A
9

Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite or Amazon SimpleDB or Google App Engine data storage.)

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.

Astonish answered 2/1, 2009 at 13:14 Comment(5)
It depends on the rawness of your original data. If the data is accumulated by data entry in a UI it is true. But if you do something like Text Mining you need to process your data first, and algos become more important.Riancho
tuinstoel: ok, but text mining is eminently parallelisable, so the algo should be ultra simple and then be run by a few hundreds or thousand processes. Image processing needs solid algos though.Astonish
I would agree if you also mean that data should be kept as minimal and normalized as reasonable. I see far too much data structure whose ostensible purpose is "better performance" that causes the opposite.Offside
+1 If I was speaking to an assembly of CS Freshmen my first advice would be to "Know Thou Data_Structures" Amen Brother.Ensue
Brooks, in "The Mythical Man-Month", had a comment that he'd be confused if you hid your tables and showed him your flow charts, but if you showed him your tables he wouldn't need to see your flow charts. This should give you an idea of how old this idea is.Rosebay
H
9

My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements and discourage others from using them because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are always compiled automagically to hash jump tables so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.
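
For illustration, a sketch of the kind of long-but-organised switch I mean (the command names and priorities are invented); for a statement like this with many cases the compiler can emit a jump table or hashed lookup rather than a long chain of comparisons.

    using System;

    static class CommandDispatcher
    {
        // Cases kept in alphabetical order so the statement stays easy to scan;
        // the commands and priorities are purely illustrative.
        public static int Priority(string command)
        {
            switch (command)
            {
                case "archive":  return 3;
                case "build":    return 1;
                case "clean":    return 2;
                case "deploy":   return 1;
                case "lint":     return 4;
                case "package":  return 2;
                case "publish":  return 1;
                case "restore":  return 2;
                case "test":     return 1;
                default:         return 5;
            }
        }
    }

    class Program
    {
        static void Main() => Console.WriteLine(CommandDispatcher.Priority("build"));
    }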

Hellespont answered 2/1, 2009 at 13:14 Comment(9)
Define long. I've seen a 13,000 line switch statement (admittedly it was C++ but still...)Mindymine
Well, (in c#) if the switch statement is generated (as opposed to manually edited), I see nothing wrong with a 13K line switch statement to be honest. It's going to end up as a hashtable anyway.Hellespont
Of course, if it has 13K lines because there is loads of code in each "case" clause, that's totally different. It should be refactored then.Hellespont
Actually, I do. Was it either that or if, and replacing all if's with switch's would have been a bit too verbose, even for python?Henriettehenriha
What I want a compiler to do is generate good assembly code for me, and switch is how I tell it I want a jump table. That said, it's easy to think you're doing things for "performance" reasons when in fact you'll never notice the difference.Offside
@Mike: if you have a switch statement with thousands of cases, you will notice the performance difference between a jump table and a series of if-else statements.Hellespont
How can you have thousands of cases? I can't imagine it, do you have an example?Riancho
@tuinstoel: It's not that hard to imagine it if you try. Before the rise of floating point units, it was a common practice to keep trigonometric functions in lookup tables. I think that keeping the results of complex math functions in premade lookup tables still makes sense today.Hellespont
Great answer. Agree completely.Coquillage
S
9

That most language proponents make a lot of noise.

Sizzle answered 2/1, 2009 at 13:14 Comment(1)
Controversial, and simultaneously axiomatic. Nice.Grosz
D
0

Human brain is the master key to all locks.

There is nothing in this world that can move faster than your brain. Trust me, this is not philosophical but practical. Well, as far as opinions are concerned, they are as under:


1) Never go outside the boundary specified in the programming language. A simple example would be pointers in C and C++. Don't misuse them, as you are likely to get the DAMN SEGMENTATION FAULT.

2) Always follow the coding standards. Yes, what you are reading is correct; coding standards do a lot for your program. After all, your program is written to be executed by a machine but to be understood by some other brain :)

Derrik answered 2/1, 2009 at 13:14 Comment(0)
F
0

The C++ STL library is so general purpose that it is optimal for no one.

Filch answered 2/1, 2009 at 13:14 Comment(1)
'The' and 'STL library' don't belong in that sentence. Remove them.Nesmith
M
0

C must die.

Voluntarily programming in C when another language (say, D) is available should be punishable for neglect.

Meath answered 2/1, 2009 at 13:14 Comment(2)
Disagree. If C is the language you are more comfortable in, and is suitable for the task, then C is the language that would make most sense for you to develop in. If you're already proficient in C, then why waste the time learning D (as you put it) if you could complete the task to an acceptable standard using C?Erivan
The answer is real easy: you and other people will forever have to clean up the things D helps you prevent in your C code, unless you belong to the top 0.5% of C programmers who never makes such mistakes in the first place. (it may be 0.05%, I'm not sure). There are certainly tools for C which help prevent such mistakes as well. The trouble is you can't count on other people having used them.Meath
B
0

You must know C to be able to call yourself a programmer!

Beverleebeverley answered 2/1, 2009 at 13:14 Comment(6)
Completely disagree. C isn't the be-all-and-end-all of programming. There were many languages before it, and there are many languages after it that will suit different situations better than C will. Also, programming is about the analytical problem solving, and not just writing code in a particular language.Erivan
Like Jasarien, I completely disagree. C is another language, it is not THE language.Kolinsky
Actually, C is pretty much THE language for some tasks, although certainly not for all. There is a lot of documentation and tutorials online - specially on low-level stuff - which are way harder to understand without C knowledge.Month
More people use C than any other language and it's used on more projects than any other language.Stelle
agreed. I wonder, would you say you can call yourself a programmer if you know D and not C? (D doesn't hide anything from you, just like C).Tonjatonjes
Depends on what you want to make. High level Windows GUI applications should not be made in C, low level ICU programming, C is required.Arsenious
R
0

Apparently mine is that Haskell has variables. This is both "trivial" (according to at least eight SO users) (though nobody can seem to agree on which trivial answer is correct), and a bad question even to ask (according to at least five downvoters and four who voted to close it). Oh, and I (and computing scientists and mathematicians) am wrong, though nobody can provide me a detailed explanation of why.

Ridicule answered 2/1, 2009 at 13:14 Comment(5)
Even though I respect math, I'd have to disagree. Those aren't variables. Those are constants. Variables should be... well... variable. I believed Haskell has no variables because "x = x + 1" isn't possible. You use functions, you don't really change the value of x. HOWEVER, that post mentioned IORef, so maybe Haskell does have variables...Month
Well, go put an answer up on the question to which I linked showing why, in the definition "double x = x * 2", "x" is a constant.Edelmiraedelson
"double x = x * 2" makes no sense in no language. Not even C.Month
It's an equation, saying that the left and right sides are equivalent (i.e., "double 3" means the same thing as "3 * 2"), and not only does it make perfect sense in mathematics, but it's perfectly valid Haskell code.Edelmiraedelson
So haskell is single-assignment within the bounds of a particular scope, and you can only "change" the value of x by reintroducing a new inner scope, which is what "double x= x *2" really does, right? It doesn't change the value of x at all, it just overloads the identifier x with a new (temporary) value at a particular scope.Aila
T
0

Logger configs are a waste of time. Why have them if it means learning a new syntax, especially one that fails silently? Don't get me wrong, I love good logging. I love logger inheritance and adding formatters to handlers to loggers. But why do it in a config file?

Do you want to make changes to logging code without recompiling? Why? If you put your logging code in a separate class, file, whatever, what difference will it make?

Do you want to distribute a configurable log with your product to clients? Doesn't this just give too much information anyway?

The most frustrating thing about it is that popular utilities written in a popular language tend to write good APIs in the format that language specifies. Write a Java logging utility and I know you've generated the javadocs, which I know how to navigate. Write a domain specific language for your logger config and what do we have? Maybe there's documentation, but where the heck is it? You decide on a way to organize it, and I'm just not interested in following your line of thought.

Tayyebeb answered 2/1, 2009 at 13:14 Comment(2)
"Do you want to make changes to logging code without recompiling?Why?" All the time. I have a deployed server that has no reason to log the finest detail when it's serving production traffic, but I have to be able to turn logging on when something goes wrong. Perhaps you just don't work on the type of applications for which this is necessary, but it's not a superfluous feature.Daredeviltry
Fair enough. That's actually a scenario that I have some experience with...but the difference is that the compile time in the cases I deal with is < 2 min. I know I have to restart the server if I change the logging config...recompiling doesn't seem like such a big deal to me in light of that.Tayyebeb
R
0

"Don't call virtual methods from constructors". This is only sometimes a PITA, but is only so because in C# I cannot decide at which point in a constructor to call my base class's constructor. Why not? The .NET framework allows it, so what good reason is there for C# to not allow it?

Damn!
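
For anyone wondering why the guideline exists in the first place, here is a minimal C# illustration (the class and field names are mine): the base constructor runs before the derived constructor body, so a virtual call from it reaches an override whose state isn't set up yet.

    using System;

    class Base
    {
        public Base()
        {
            // Virtual dispatch happens even though Derived's constructor body
            // hasn't run yet, so the override sees an unassigned field.
            Describe();
        }

        protected virtual void Describe() => Console.WriteLine("Base");
    }

    class Derived : Base
    {
        private readonly string _name;

        public Derived(string name)
        {
            _name = name; // runs only after Base() has already finished
        }

        protected override void Describe() =>
            Console.WriteLine($"Derived: {_name ?? "<not initialised yet>"}");
    }

    class Program
    {
        // Prints "Derived: <not initialised yet>": the virtual call in Base()
        // dispatches to Derived.Describe() before _name has been assigned.
        static void Main()
        {
            new Derived("configured");
        }
    }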

Reflectance answered 2/1, 2009 at 13:14 Comment(0)
S
-1

Copy/Pasting is not an antipattern, it fact it helps with not making more bugs

My rule of thumb - type only what cannot be copy/pasted. If creating a similar method, class, or file - copy an existing one and change what's needed. (I am not talking about duplicating code that should have been put into a single method).

I usually never even type variable names - either copy/pasting them or using IDE autocompletion. If I need some DAO method - I copy a similar one and change what's needed (even if 90% will be changed). May look like extreme laziness or lack of knowledge to some, but I almost never have to deal with problems caused by misspelling something trivial, and they are usually tough to catch (if not detected at the compile level).

Whenever I step away from my copy-pasting rule and start typing stuff I always misspell something (it's just statistics, nobody can write perfect text off the bat) and then spend more time trying to figure out where.

Sigmund answered 2/1, 2009 at 13:14 Comment(1)
If you think getting code to compile is a big problem... (shakes head)Hemicellulose
O
-1

To quote the late E. W. Dijkstra:

Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

Computer Science is no more about computers than astronomy is about telescopes.

I don't understand how one can claim to be a proper programmer without being able to solve pretty simple maths problems such as this one. A CRUD monkey - perhaps, but not a programmer.

Oireachtas answered 2/1, 2009 at 13:14 Comment(0)
A
-1

That WordPress IS a CMS (technically, therefore indeed).

https://stackoverflow.com/questions/105648/wordpress-is-it-a-cms

Administration answered 2/1, 2009 at 13:14 Comment(1)
Not exactly, it is a CMS focussed on blogging. Like MySpace is a social network focussed on music. And they are both terrible.Nesmith
L
-2

A real programmer loves open-source like a soulmate and loves Microsoft as a dirty but satisfying prostitute

Leggat answered 2/1, 2009 at 13:14 Comment(1)
Haha, very funny. Good one :)Schaeffer
D
-2

"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

so don't criticize!!!!!!!!

Create your own code how YOU want.

Dariusdarjeeling answered 2/1, 2009 at 13:14 Comment(3)
In the working world it is not an option to rewrite code "the way you want it" you have to deal with what is there regardless of who wrote it. The rest of your post is incomprehensible.Gerge
I totally disagree with you: do not reinvent the wheel, they say!Perfuse
I totally agree that this is the most controversial statement.Baxie
R
-4

Software sucks due to a lack of diversity. No offense to any race, but things work pretty well when a profession is made up of different races and both genders. Just look at overusing non-renewable energy: it is going great because everyone is contributing, not just the "stereotypical guy".

Rafter answered 2/1, 2009 at 13:14 Comment(2)
I agree that programming is a privileged white collar job that attracts privileged people, and that it's an ol' boys club. But this really only hurts the quality of life at the workplace (NO, I do not want to talk about anime at work), not the quality of software.Booher
Wow... I don't know where you people are working (and no, not that "you people"). The last few places that I have worked are diversified and definitely not a privileged position. Maybe if you are a COBOL programmer from the 60s...Redfield
M
-4

Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.

Madge answered 2/1, 2009 at 13:14 Comment(8)
What does this even mean? "Hey, I just released a patch that deleted the customer's requested functionality because I felt like it but it's ok because I have documented it and told you that I did it." Is that the kind of thing you were suggesting?Gerge
This could happen if a programmer has poor judgement, but I ultimately believe developers have better judgement than they are given credit for. They should be allowed to fix bugs without a bunch of friction. I believe in trust over regulation with the developers I work with.Madge
+1. Why the downvotes? Maybe doing the kind of work that demands that level of scrutiny removes the ability to see that there's more than one kind of coding environment? There's no manager to lean on when your world-view-interpreter algorithms are wonky.Vocal
I could count on one hand the number of programmers I know that I would trust in that sort of environment - too many cowboys out there.Scrobiculate
Ok, I would start by modifying all the code you wrote. It would be interesting to see if you would still feel the same way.Premer
@Eric Mills: Go work for a bank, or qualify your answer. Maybe you are unaware or underestimating the impact erroneous (or even malicious) code changes can have on a company. Hours of work lost, bazillions of space credits blown. Careers have been destroyed over these kinds of things, people fired on the spot. Probably not something you'll understand until you are personally responsible for an insanely important system...and some cowboy wants to tweak it at will.Warlord
At least, in all systems i worked with, this logic would be a very bad policy. Could you provide us with an environment where you would like this to occur?Perfuse
I wouldn't even want myself in that kind of environment. Forgetting matters of trust or judgment for a second, with any project with more than 1 dev you run into concurrency issues. We both coded well, but our updates clashed . . . in production. And do you truly think QA before release is useless? Any important or sizable project has those checks for a reason. Some non-important, non-sizable projects with 1 or 2 devs and a knowledgeable user base (e.g., some games) can and do practice this, but they are exceptions - not the rule.Resumption
F
-8

My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While..EndWhile" in a language confuses new programmers.

Update - Extra Notes

One common mistake new programmers make with While is they assume that the code will break as soon as the tested condition flags false. So - If the While test flags false half way through the code, they assume a break out of the While Loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loops types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)
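
For concreteness, here's a C# sketch of the transformation (using C#'s do...while in place of Repeat...Until, with invented sample data): a pre-test while loop rebuilt from a post-test loop plus a boolean flag.

    using System;

    class LoopEquivalence
    {
        static void Main()
        {
            int[] values = { 3, 1, 4, 1, 5, 0, 9 }; // invented sample data

            // Pre-test loop: sum values until a 0 sentinel is reached.
            int sumWhile = 0, i = 0;
            while (i < values.Length && values[i] != 0)
            {
                sumWhile += values[i];
                i++;
            }

            // The same loop rebuilt from a post-test loop plus a flag,
            // which is the replacement proposed above.
            int sumRepeat = 0, j = 0;
            bool keepGoing = j < values.Length && values[j] != 0;
            do
            {
                if (keepGoing)
                {
                    sumRepeat += values[j];
                    j++;
                    keepGoing = j < values.Length && values[j] != 0;
                }
            } while (keepGoing);

            Console.WriteLine($"{sumWhile} == {sumRepeat}"); // 14 == 14
        }
    }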

Frustule answered 2/1, 2009 at 13:14 Comment(11)
What if you're unaware of when a condition is false? And where has Repeat come from? While works on the English basis of "while this condition is true, do this"Wideopen
You could replace all constructs with goto.Concatenation
Not only do I like WHILE but I would also borrow Nemerle's UNLESS and put it into C#.Brubaker
a language designed for mediocre or inexperienced programmers gets only mediocre and inexperienced users.Amaranthaceous
I haven't seen Repeat...Until since BBC BASIC! VB now has Do...Loop, Repeat...Until and While...Wend should both be removed. It bugs me though when I see, Do While Not ... instead of Do Until ...Ornate
The first question I usually ask when I see a While loop is "Will it break during the loop or after the check?" The reason for this is I've used a language or two before that immediately broke out of the loop when the condition returned false.Tanya
This is nonsense. Neither repeat nor while will break in the middle so your argument is absurd. Basically the developers need to be instructed in the use of break/exit/goto to exit a loop early. As for testing condition at the beginning/end both have their uses.Decided
Also do { statements } while (!condition) is the same as do { statements } until (condition) so I don't know what the complaint is.Decided
Actually, I'm not sure if it's the same or not, but I never use do ... while blocks, so I think perhaps I agree with you. :)Dunkin
"One common ... flags false" - How common is this? In what language? Perhaps the answer for those who have this idea when it's false is "RTFM!". This is just a bad solution looking for a problem it can't find.Gerge
A while with a repeat is a if <not condition> then repeat until condition not a while + boolConvince
A
-10

If it's not native, it's not really programming

By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it's a script.

This was just simple common sense, completely non-controversial, back before Java came out. Suddenly there was a scripting language with a large enough feature set to accomplish tasks that had previously been exclusively the domain of programs. In response, Microsoft developed the .NET framework and some scripting languages to run on it, and managed to muddy the waters further by slowly reducing support for true programming among their development tools in favor of .NET scripting.

Even though it can accomplish a lot of things that you previously had to write programs for, managed code of any variety is still scripting, not programming, and "programs" written in it do and always will share the performance characteristics of scripts: they run more slowly and use up far more RAM than a real (native) program would take to accomplish the same task.

People calling it programming are doing everyone a disservice by dumbing down the definition. It leads to lower quality across the board. If you try and make programming so easy that any idiot can do it, what you end up with are a whole lot of idiots who think they can program.

Amido answered 2/1, 2009 at 13:14 Comment(9)
This sounds like argumentative nonsense to me. Suppose I compile a program which satisfies your definition... but then run it in VMWare or something like that. Does that make it a "script" because it's running virtually? Of course not. Likewise if you're dismissing Java as "not programming" would your view change if at any point anyone brought out a "Java CPU" (if such things don't exist already)? Yes, there are plenty of arguments for not trying to "dumb down" programming too much - but the way you're putting it takes that much too far.Derr
With all due respect for you and your obvious intelligence, I have to disagree. A VM is just an abstraction of the hardware. The program is still capable of running directly on the hardware and talking to it. By contrast, if someone built a Java CPU, you still wouldn't be able to write an OS or device drivers for it in Java. (No pointers, etc.)Amido
So it would have to be able to do more than just Java - but it would still be able to execute Java code natively. Would all the "non-programmers" in the world who are currently writing Java suddenly become programmers in your view? Sorry, I still can't see this as a sensible or useful delineation at all.Derr
You might also want to try convincing the Wikipedians, who certainly include scripts as programs, even leaving aside the question of whether Java is a script or not: en.wikipedia.org/wiki/Computer_programDerr
I seem to remember that UCSD Pascal compiled to p-code, which was then interpreted, but Pascal has certainly always been considered a programming language and not a scripting language. The college I was at also had something they called a Pascal Microengine, which could execute p-code natively. So the distinction is somewhat arbitrary and defies definition.Omniscience
Gee, a Delphi programmer ridiculing code that runs on a framework! What a surprise! Self-deluded, elitist crap.Mackler
Delphi has a framework too. It's called the VCL. The difference is that it's native code, and it tends to add a few hundred kilobytes to your application, as opposed to .NET, which adds a few hundred MEGABYTES of dependencies.Amido
also, what about the Burroughs machines that ran COBOL natively as their assembly language?Subastral
www.ajile.com. Hardware CPU runs java natively, realtime with direct access to the hardware.Formate
C
-12

Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.

Cowpoke answered 2/1, 2009 at 13:14 Comment(15)
Or you could make your entire program one (reaaaly long) line of code. That's always fun.Thrombophlebitis
BAKA!! even in a functional language, like haskell, you can have several lines in a function!Warehouse
When one combines the rule that a class should fit on the screen with the rule that every method has only one line, a class can contain only approximately 7 lines of code.Riancho
I'm amused that this is currently the lowest-ranked answer; I think I've succeeded at the "controversial" part.Cowpoke
It is indeed controversial, so I up.Riancho
I agree completely, when will people see the light? I use Perl so I don't know how to write a function with more than one line of code, also, what is this "Refactor" thing you speak of? :-OReveal
You must be a functional programmer... but one line per function is still a little extreme ;)Ammonify
I'm sorry this is nonsense. -1 from meAryn
It's not controversial - it's inane.Penile
That depends on your definition of "line". For some methods even a single line is too much.Legitimate
No method I've ever written (as far as I recall) has just one line of code =)Stockstill
int screwYou() { printf("This is balls...\n"); }Erivan
Typically, when I write a VOID dummy method, just due to formatting conventions, it takes at least two lines. Non-void functions typically take three lines. Of course, like Kiv said, you can have 10.000 characters in a single line - so "lines" might not be the best metric for program size counting.Month
This is controversial because I do not think you can apply this type of statement to all languages.Godson
@atconway: C++ fails, because you can't do anything useful in one statement. Perl fails because even one line is confusing. (To all: there is sanity behind this, but I was going for shock value.)Cowpoke
