Hi,
let us know what you find.
TIA (thanks in advance)
I removed my sig, cause i select the flag, the flag often the target of enemy.
SAR brain-tumor
[img]http://img91.imageshack.us/img91/460/cellphonethumb0ff.jpg[/img]
My biggest tip would be to write several small, rather generic functions that you can later call from your main(). Think of them as building blocks that you can use and reuse over and over again. It's a little hard to build a castle out of a house.
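The building-block idea can be sketched in a few lines of Python. This is just an illustration; the file name and helper names are invented, not from any real project:

```python
# Hypothetical building blocks: small, generic, reusable functions
# that main() merely wires together.
def read_lines(path):
    """Return a list of stripped lines from a text file."""
    with open(path) as f:
        return [line.strip() for line in f]

def count_words(lines):
    """Count the words across a list of lines."""
    return sum(len(line.split()) for line in lines)

def main():
    lines = read_lines("input.txt")   # invented file name
    print(count_words(lines))

if __name__ == "__main__":
    main()
```

Each helper is generic enough to be reused by the next program, which is the point.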
It's a little hard to build a castle out of a house.
It would fit well into gedit's code snippets plugin.
http://live.gnome.org/Gedit/SnippetPlugin
Write small functions like Penguin says, then test them thoroughly before you use them together. With luck, you can catch bugs before they find a place to hide.
Comment profusely.
Unit test.
Design for a) extensibility and b) maintainability.
Dusty
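The "unit test" tip could look something like this minimal Python sketch using the standard unittest module; the add() function is a toy stand-in, not anyone's real code:

```python
import unittest

def add(a, b):
    """Toy function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_small_numbers(self):
        self.assertEqual(add(1, 2), 3)

    def test_negatives(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()
```

Run the file directly and unittest reports each passing or failing case, so bugs surface before the pieces are glued together.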
Unit test.
Design for a) extensibility and b) maintainability.
Dusty
Using an OO language makes this much simpler, IMHO.
I wouldn't "Comment profusely." I've marked undergrad programming exercises where commenting was necessary. However, some people went overboard and it just made the code unreadable.
I would comment "where necessary", especially where you've just coded something that you think is "cool", because it probably means you've just done something new and fancy that won't make sense to you or any one else later on.
However, some people went overboard and it just made the code unreadable.
# The following call will wash your dog,
# fetch your morning paper, make you coffee,
# and do everything you ever wanted to do...
# if that involves adding two numbers that is,
# because that is what happens here. We add two numbers
# wewt! Shout out to arooaroo, phrakture, and monkeys.
a = add(1,2)
# a should be 3 now! zomg!
I would have to say, though, that I prefer more commenting to no commenting, but I agree... a happy medium should be struck. I think people generalize and say "do more commenting" because it is often held that most people do too little.
"Be conservative in what you send; be liberal in what you accept." -- Postel's Law
"tacos" -- Cactus' Law
"t̥͍͎̪̪͗a̴̻̩͈͚ͨc̠o̩̙͈ͫͅs͙͎̙͊ ͔͇̫̜t͎̳̀a̜̞̗ͩc̗͍͚o̲̯̿s̖̣̤̙͌ ̖̜̈ț̰̫͓ạ̪͖̳c̲͎͕̰̯̃̈o͉ͅs̪ͪ ̜̻̖̜͕" -- -̖͚̫̙̓-̺̠͇ͤ̃ ̜̪̜ͯZ͔̗̭̞ͪA̝͈̙͖̩L͉̠̺͓G̙̞̦͖O̳̗͍
Using an OO language makes this much simpler, IMHO.
That's pretty much a given nowadays, although I'm on the "object oriented is useful sometimes, but shouldn't be required all the time" team.
I wouldn't "Comment profusely." I've marked undergrad programming exercises where commenting was necessary. However, some people went overboard and it just made the code unreadable.
Yeah, same here -- I actually took marks off one assignment for over-commenting just this week. By "profusely" I mean a hell of a lot more than most people do, or at least than most grad students do. Or than most kernel developers do. I'd say a good threshold is "1/3 to 1/2 of the total lines of code should be comment lines" -- 1/3 for Python, 1/2 for Java.... ;-) I'm guessing you will argue that is a bit much, but I don't mean commenting each line; I mean having a GOOD description of every class, method, and function. Something pydoc or javadoc or doxygen will find useful.
Having said that, I actually think having inline comments that are not at the method or function level is an indication that the method is getting too big and complex and should be divided in half.
I would comment "where necessary", especially where you've just coded something that you think is "cool", because it probably means you've just done something new and fancy that won't make sense to you or any one else later on.
Everything I write is cool. ;-) (I always find some python module or another that has done the boring stuff already.)
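For what it's worth, the kind of function-level description pydoc picks up might look like this -- a hypothetical function, invented purely to show the shape of the docstring:

```python
def moving_average(values, window):
    """Return the simple moving averages of `values`.

    Args:
        values: a sequence of numbers.
        window: the number of consecutive samples to average (>= 1).

    Returns:
        A list of len(values) - window + 1 averages, one per
        position the window can occupy.
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

Running `pydoc` on the module (or `help(moving_average)` in the interpreter) prints that description verbatim, which is exactly the "GOOD description of every function" being argued for.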
one and a half more tips:
Spend more time refactoring than coding.
Focus on elegant code.
Dusty
The greatest tip of all: learn how to "read code". No, it's not as simple as knowing the syntax. Code reading is very very difficult.
When I can give you something in a language you don't know, and you can tell me the section of code something happens in, you have it down.... hurry, find me the section in ejabberd that handles the iq protocol (without grep, fool!)
Tip (call this the "poor man's goto"):
do {
    if (do_something() == SOME_RUDE_ERROR) break;
    ok_cool_we_survived_that();
    if (do_something_more_crazy() == SOME_ERROR) break;
    do_some_final_thing();
} while (FALSE);
If you compile this and examine the assembly, you will find nothing but branch instructions -- equivalent to goto statements. The compiler intelligently throws out the pointless 'while (FALSE)' conditional.
while (FALSE);
Programming tip #2: C99 is your friend 8)
while(false);
Dusty wrote:Unit test.
Design for a) extensibility and b) maintainability.
Dusty
Using an OO language makes this much simpler, IMHO.
There are certain classes of problems that lend themselves nicely to OO but there are plenty of others where OO actually complicates the design. In some cases, designing for a functional language can produce vastly superior results. In other cases good old fashioned low-level bit bashing is the cleanest solution.
The real key is understanding all these different design techniques so you can find the right match for the problem at hand. Learn as many languages and as many design techniques as you possibly can. Then choose a design that fits the problem and a language that fits the design, and you maximize your chances of ending up with nice, clean, elegant code.
And remember that different parts of the problem may lend themselves to completely different design techniques. An OO GUI framework bound to a dynamic scripting language, using functional programming for algorithm design and a collection of libraries providing low-level implementations of hardware-specific or speed-critical primitive functions, may be the right solution. Don't be afraid to mix and match different languages in a single application. C, C++, and Java programmers seem most likely to suffer the one-language-fits-all affliction.
Quoted for truth!
There are certain classes of problems that lend themselves nicely to OO but there are plenty of others where OO actually complicates the design. In some cases, designing for a functional language can produce vastly superior results. In other cases good old fashioned low-level bit bashing is the cleanest solution.
The real key is understanding all these different design techniques so you can find the right match for the problem at hand. Learn as many languages and as many design techniques as you possibly can. Then choose a design that fits the problem and a language that fits the design, and you maximize your chances of ending up with nice, clean, elegant code.
And remember that different parts of the problem may lend themselves to completely different design techniques. An OO GUI framework bound to a dynamic scripting language, using functional programming for algorithm design and a collection of libraries providing low-level implementations of hardware-specific or speed-critical primitive functions, may be the right solution. Don't be afraid to mix and match different languages in a single application. C, C++, and Java programmers seem most likely to suffer the one-language-fits-all affliction.
Tip: When you're stuck, grab the janitor.
I found that explaining a problem is usually the key to solving it. Ever post a question to a list only to find the solution yourself before anyone has a chance to reply? Me too, and far too often. Feels like you're wasting people's time when that happens. But it works!
In fact, years ago I discovered that anyone, technical or not, will do. Explain a problem to someone and, even though the subject may be unfamiliar to them (even way over their head), the result is the same. Mid-sentence: "Oh! That's it!"
Late at night, on those long code marathons, the janitors would learn to empty my trash last... or not at all. :evil:
Quoted for truth!
Very well said, friend Kopsis!
Comment profusely.
Code in such a way that comments are not necessary.
Dusty wrote:Comment profusely.
Code in such a way that comments are not necessary.
That's simply not possible 100% of the time and, besides, it's all relative. An experienced programmer may believe they are writing the most lucid code ever witnessed by mankind, yet a less experienced programmer may still have difficulty comprehending it.
Dusty wrote:Comment profusely.
Code in such a way that comments are not necessary.
Use Python.
tranquility wrote:Dusty wrote:Comment profusely.
Code in such a way that comments are not necessary.
That's simply not possible 100% of the time and, besides, it's all relative. An experienced programmer may believe they are writing the most lucid code ever witnessed by mankind, yet a less experienced programmer may still have difficulty comprehending it.
I understand what you're saying, but there are several ways to accomplish something. Granted, sometimes it has to be hard to grasp and ugly, but one should always opt, when possible, for the simplest and cleanest solution, without all that 1337 kung-fu that you might find. Several times, upon re-reading code that I haven't worked on for a while, I find myself struggling to understand something because it wasn't well thought out and well planned.
I understand what you're saying, but there are several ways to accomplish something. Granted, sometimes it has to be hard to grasp and ugly, but one should always opt, when possible, for the simplest and cleanest solution, without all that 1337 kung-fu that you might find. Several times, upon re-reading code that I haven't worked on for a while, I find myself struggling to understand something because it wasn't well thought out and well planned.
Self-documenting code is an urban legend. Regardless of how simplistic your code is, there is *always* a need to comment.
Take this for instance:
int rgLevel = 17;
What? I see no kung-fu, but I do not know what "rgLevel" means, or why 17 is significant as an initialization value.
How about a real example (using Windows Media Encoder in C#, as I just did this like 5 hours ago):
WMEncoder encoder = new WMEncoder();
encoder.PrepareEncoding(false);
encoder.File.LocalFileName = "C:\\foo.wmv";
encoder.Start();
Sure, this is clean and neat and makes perfect sense, but look at the PrepareEncoding call... why does it take a "false" parameter? What does that do? And for anyone who's read the docs: why is it explicitly called when Start() calls PrepareEncoding(true) anyway?
Anyone who claims "self-documenting" code can truly exist needs to show me an example...
The only real rule for commenting is "explain what needs it." That's pretty vague, and requires good judgement. I like to think about what would be necessary to explain what the program is doing in a general way, as long as it doesn't duplicate the code's exact meaning. For example, this code from one of my projects doesn't need any explanation; anything I wrote would be almost exactly the same.
void init_all () {
    init_ncurses();
    init_openal();
    init_vals();
}
On the other hand, these #defines would be meaningless without the attached note.
// directions the player can travel in, for storing the same
#define PLAYER_U 0
#define PLAYER_R 1
#define PLAYER_D 2
#define PLAYER_L 3
#define PLAYER_S 4 // standing still
In general, if you can make a statement more meaningful with a comment, add that comment, but leave out anything completely obvious. Don't make a comment saying "add foo to bar" (for "bar += foo"); say why you do it.
Also, never mention variable names in a comment unless you're explaining what that name means, because then you have to update the comment too when it changes. And you *will* forget to do that, meaning that eventually you can't trust your comments and have to audit them all. Being more specific than you have to be is a bigger pain than being vague.
but look at the PrepareEncoding call... why does it take a "false" parameter? What does that do?
Don't mean to get nit-picky here, since I agree with you, but... I don't know what this call does, and good self-documenting code wouldn't have that value there; it's the equivalent of a magic number. It would be replaced with a constant defined somewhere else in the file, and the name of that constant should convey some information about the value and the purpose of the call.
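A minimal Python sketch of that fix: replace the magic boolean with a named constant, so the call site reads as prose. The names and behavior here are invented for illustration, not taken from any real encoder API:

```python
# Hypothetical constants naming the two meanings the bare
# boolean used to hide. The semantics are invented.
LIVE_SOURCE = False
ARCHIVED_SOURCE = True

def prepare_encoding(archived_source):
    """Stand-in for a library call that takes an opaque boolean."""
    return "archive" if archived_source else "live"

# The call site now documents itself instead of passing a bare 'false':
mode = prepare_encoding(LIVE_SOURCE)
```

The function behaves identically either way; the only change is that a reader no longer has to open the docs to learn what the argument means.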
Write tiny abstract methods. When I use common lisp, I always try to keep my functions as close to one line as possible.
Always start with low-level functions, then build higher-level functions on top of the lower-level ones, and so on.
And as everyone else is saying: write good comments.
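A tiny Python sketch of that bottom-up layering, with all names invented for illustration:

```python
# Low level: one tiny, general-purpose function.
def clamp(x, lo, hi):
    """Restrict x to the range [lo, hi]."""
    return max(lo, min(x, hi))

# Mid level: built from the low-level piece.
def normalize(values):
    """Clamp every value into [0.0, 1.0]."""
    return [clamp(v, 0.0, 1.0) for v in values]

# High level: built from the mid-level piece.
def brightness(values):
    """Average of the normalized values."""
    vals = normalize(values)
    return sum(vals) / len(vals)
```

Each layer is close to one line and only talks to the layer directly beneath it, which is the Lisp-flavored habit being described.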
How about a real example (using Windows Media Encoder in C#, as I just did this like 5 hours ago):
WMEncoder encoder = new WMEncoder();
encoder.PrepareEncoding(false);
encoder.File.LocalFileName = "C:\\foo.wmv";
encoder.Start();
Sure, this is clean and neat and makes perfect sense, but look at the PrepareEncoding call... why does it take a "false" parameter? What does that do? And for anyone who's read the docs: why is it explicitly called when Start() calls PrepareEncoding(true) anyway?
Anyone who claims "self-documenting" code can truly exist needs to show me an example...
You can't write self-documenting code if you use a bad library.
Self-documenting code can exist, but you'll need descriptive names, the right level of abstraction, and some comments to explain clever algorithms.
For example:
function make_rational(numerator, denominator){
    divisor = gcd(numerator, denominator);
    return [numerator / divisor, denominator / divisor];
}

function numerator(rational){
    return rational[0];
}

function denominator(rational){
    return rational[1];
}
Is this code self-documenting, or should you add a comment like "reduce to lowest terms"?
function make_rational(numerator, denominator){
    // reduce to lowest terms
    divisor = gcd(numerator, denominator);
    return [numerator / divisor, denominator / divisor];
}
You also need the right level of abstraction: we don't need to know how gcd() is implemented.
function gcd(a, b){
    if(b == 0){
        return a;
    } else {
        return gcd(b, a % b);
    }
}
This code needs a comment if you want to understand how it works, but if you just trust that it computes the greatest common divisor, it's fine.
You can't write self-documenting code if you use a bad library.
And you have stumbled upon the point. Dusty almost got it too, but you hit it on the nose. Many, many times you _must_ use a "bad library". Now, I am not saying "omg this library sucks"; I am saying that the library may be the greatest thing since sliced bread, just not up to the individual programmer's "high standards" -- it's all subjective.
Take a look at ffmpeg's libavcodec -- it is, hands down, the de facto a/v codec library. Problem is, the public interface sucks. That's my opinion. The point is, if you want to write "100% self-documenting code", you either need to rewrite libavcodec before you do so, or use it and add a few comments.
Everyone has different standards, everyone has different styles. Hell, everyone uses different verbs (or nouns, if you're a Java fan) and abbreviations. The point is, code that is self-documenting to _YOU_ will never, ever be self-documenting to everyone.
Don't ever assume everyone thinks like you. Add comments. Assume people looking at your code are the biggest idiots ever and explain everything even quasi-complex. It helps, believe me.
Case in point:
Is this code self-documenting, or should you add a comment "reduce to lowest terms"
Let's assume I know nothing about math. Pretend I took "business math" in school and barely know how to "do fractions" (I know people like this). Your code is now officially the farthest thing from "self-documenting" -- it actually requires me to go and find *MORE* documentation, about a topic I don't understand.