So after reading some of these posts and reflecting on the history of programming languages, I find myself wondering what the programming language of the future will be. As the human-computer interface evolves (touch interfaces, voice recognition, computer vision, etc.), how will this influence the process of creating computer programs? Will programming become more graphical (e.g. flow charts, UML diagrams, etc.)? Setting egos aside, does it make sense for programming to be more graphical? Will we reach a point where we can essentially just sketch out a concept and let the compiler/interpreter handle the rest?
Perhaps this question is best left to Clarke and Asimov, but they're dead. This leaves you guys.
(Moderator, I apologize in advance if this is out of scope for this forum.)
Last edited by bsilbaugh (2011-11-23 01:15:47)
- Good judgement comes from experience; experience comes from bad judgement. -- Mark Twain
- There's a remedy for everything but death. -- The wise fool, Sancho Panza
- The purpose of a system is what it does. -- Anthony Stafford Beer
Offline
The way I see it, there is a line between easy and granular. You can absolutely create a functional, usable program with a drag-and-drop interface (ask anyone in FRC about Labview). However, I think the level of control needed for more complex projects can only be achieved through straight-up code, the reason being that there are only so many "blocks", if you will, that a compiler can include for the user to draw from. At some point, the user loses some highly specific pieces. Naturally, you could argue that there could be libraries and libraries of these pre-built code "blocks", but at that point, how is it any better than text-based code? Is it really that much easier to drop an "if" block in there than to type
if (TRUE)
{
}
I don't know. But in all honesty, I think it's like comparing apples to oranges. It's absolutely far easier and more efficient to build Android and iOS apps in GUI-land, but just try to imagine building a kernel COMPLETELY out of code snippets. It would likely be twice as large as it needed to be and half as efficient. So, in short, I really suppose it depends on the situation.
*EDIT* Ick, typos.
Last edited by TheHebes (2011-11-23 02:42:39)
Laptops:
MSI GS60 Ghost
Asus Zenbook Pro UX501VW
Lenovo Thinkpad X120e
Offline
So after reading some of these posts and reflecting on the history of programming languages, I find myself wondering what the programming language of the future will be. As the human-computer interface evolves (touch interfaces, voice recognition, computer vision, etc.), how will this influence the process of creating computer programs?
Why should these things influence programming languages at all? They're irrelevant. What will matter is system architecture: e.g. as large numbers of processors become more common, languages that use more advanced threading constructs and minimize state will become attractive.
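A minimal C++11 sketch of what "minimize state" buys you on many cores (the function names here are mine, for illustration only, not a proposal for any particular future language): when each task works only on its own inputs and returns a value, spreading the work across processors is almost mechanical.

#include <cstddef>
#include <cstdio>
#include <functional>
#include <future>
#include <vector>

// Sum the squares of one chunk; no shared state is touched, so chunks
// can run concurrently without locks.
static long long sum_squares(const std::vector<int>& v, std::size_t begin, std::size_t end)
{
    long long total = 0;
    for (std::size_t i = begin; i < end; ++i)
        total += static_cast<long long>(v[i]) * v[i];
    return total;
}

long long parallel_sum_squares(const std::vector<int>& v)
{
    const std::size_t mid = v.size() / 2;
    // Each task owns its result; the only "communication" is the return value.
    auto lower = std::async(std::launch::async, sum_squares, std::cref(v), std::size_t(0), mid);
    auto upper = std::async(std::launch::async, sum_squares, std::cref(v), mid, v.size());
    return lower.get() + upper.get();
}

int main()
{
    std::vector<int> v(1000);
    for (int i = 0; i < 1000; ++i) v[i] = i;
    // 0^2 + 1^2 + ... + 999^2 = 332833500
    std::printf("%lld\n", parallel_sum_squares(v));
    return 0;
}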
Will programming become more graphical (e.g. flow charts, UML diagrams, etc.)? Setting egos aside, does it make sense for programming to be more graphical? Will we reach a point where we can essentially just sketch out a concept and let the compiler/interpreter handle the rest?
The point about programming is that the code, in whatever form, must be completely unambiguous. Any visual language capable of such precision is going to be extremely demanding. Visual languages don't scale well for such tasks - there are good reasons why no one writes in hieroglyphics any more.
Offline
... (ask anyone in FRC about Labview). ...
Personally, I hated Labview so much I just used C++ for our robots. For me, it was orders of magnitude easier. Maybe I'm just set in my ways, but I don't think I'd ever want to switch to any form of programming that uses something less efficient than a keyboard. Perhaps it will catch on for newer generations, or people who don't want to know how everything works.
Offline
TheHebes wrote:... (ask anyone in FRC about Labview). ...
Personally, I hated Labview so much I just used C++ for our robots. For me, it was orders of magnitude easier. Maybe I'm just set in my ways, but I don't think I'd ever want to switch to any form of programming that uses something less efficient than a keyboard. Perhaps it will catch on for newer generations, or people who don't want to know how everything works.
Absolutely agreed. Last year, my team used Labview and we loathed it. I felt that at least half the time we spent programming was used to develop convoluted workarounds just to get Labview to understand what we wanted it to do. C++ this year has been smooth sailing all the way.
Laptops:
MSI GS60 Ghost
Asus Zenbook Pro UX501VW
Lenovo Thinkpad X120e
Offline
So after reading some of these posts and reflecting on the history of programming languages, I find myself wondering what the programming language of the future will be. As the human-computer interface evolves (touch interfaces, voice recognition, computer vision, etc.), how will this influence the process of creating computer programs? Will programming become more graphical (e.g. flow charts, UML diagrams, etc.)? Setting egos aside, does it make sense for programming to be more graphical? Will we reach a point where we can essentially just sketch out a concept and let the compiler/interpreter handle the rest?
The future is more Alien than Star Trek. C will reign supreme forever. Your standard users might enjoy prettier interfaces, but we the providers thereof will continue to disdain such artifice.
Offline
Graphical languages are normally used to get quick results where the efficiency and maintainability of the solution are irrelevant. I have many years of C++ experience and have tried Labview once - not an experience I want to repeat. Using "design patterns" transforms the software from lines of code into higher-level concepts. Designing the software at this level requires maintaining it at this level. A round-trip development tool would really help - are there any open source tools?
Offline
Since "writing software" is becoming more and more a synonym for "developing a GUI", I can understand the wish for better graphical tools. I once saw a concept video of an IDE that represented all the relevant things in bubbles (but I couldn't remember the name... a Google round trip later I remembered it: Code Bubbles -.-). It's not a graphical language; it's a very graphical IDE.
The only real step away from writing code by hand would be an AI that understands flowcharts and spoken language. It would also need to understand natural patterns and interpret them appropriately.
Example: A graphical database query on a touchscreen without a keyboard.
Problem: You have a song in mind, something with "I have a red door and I humm humm humm humm humm. No colors anymore, I want them to humm humm".
Abstract solution: Show all entries in a music database in the time frame between 1970 and 2010 relevant to the song fragments "I have a red door and I" and "No colors anymore I want them to", sorted by Artist, Year, Album and bitrate. You are not sure about your number of 'humms', because you're not sure about the melody anymore.
Implementation:
1. You click on the query definition bar and sort the fields by drag and drop to [Artist][Year][Album][bitrate]
2.I You click on the content specification field and start browsing an icon database sorted by fields of words, trying to symbolize your query.
2.II You click on the content specification field and add two parameter fields, your virtual keyboard pops up and you enter your text pieces.
3. The interface transforms your query into an SQL query and outputs your results.
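For concreteness, step 3 might end up emitting something like the query below (a hedged sketch only - the table and column names are invented for illustration, and the real SQL would depend entirely on the database schema):

#include <iostream>
#include <string>

int main()
{
    // Hypothetical query the interface could generate from the two lyric
    // fragments and the chosen field order (Artist, Year, Album, bitrate).
    const std::string sql =
        "SELECT artist, year, album, bitrate "
        "FROM tracks "
        "WHERE year BETWEEN 1970 AND 2010 "
        "  AND (lyrics LIKE '%I have a red door and I%' "
        "       OR lyrics LIKE '%No colors anymore I want them to%') "
        "ORDER BY artist, year, album, bitrate;";

    std::cout << sql << '\n';  // a real UI would hand this to its database driver
    return 0;
}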
What I described here is in fact a more versatile implementation of all those web browser activities we do all day. As we all know, the web is limited and basically not very good at providing (machine-readable) information that was requested by a machine, since most of the information there is human-readable. If something as conceivable as a search query is already too abstract to be handled well by a graphical solution, I wonder how something like an audio decoder could be constructed two-dimensionally.
Offline
jac wrote:TheHebes wrote:... (ask anyone in FRC about Labview). ...
Personally, I hated Labview so much I just used C++ for our robots. For me, it was orders of magnitude easier. Maybe I'm just set in my ways, but I don't think I'd ever want to switch to any form of programming that uses something less efficient than a keyboard. Perhaps it will catch on for newer generations, or people who don't want to know how everything works.
Absolutely agreed. Last year, my team used Labview and we loathed it. I felt that at least half the time we spent programming was used to develop convoluted workarounds just to get Labview to understand what we wanted it to do. C++ this year has been smooth sailing all the way.
Having used Labview in my student days, it's obviously NOT for programmers who 'think' in C and related languages. It requires a new way of thinking about the process to be used effectively; ironically, those classmates who were less exposed to text-based programming caught on much faster and produced much better results.
Allan-Volunteer on the (topic being discussed) mailn lists. You never get the people who matters attention on the forums.
jasonwryan-Installing Arch is a measure of your literacy. Maintaining Arch is a measure of your diligence. Contributing to Arch is a measure of your competence.
Griemak-Bleeding edge, not bleeding flat. Edge denotes falls will occur from time to time. Bring your own parachute.
Offline
A lot of interesting feedback. I certainly agree with the comments regarding Labview. I had to learn Labview during an undergrad course I took, and I'm not particularly fond of it.
Nonetheless, with touch interfaces becoming more common (we've had digitizing tablets for years), I have a hard time believing that the next generation will be using keyboards. Also, reflect for a moment on the language of mathematics. I am unaware of any written language that is more expressive. That said, wouldn't it make sense to write or draw a program, as opposed to typing it, using a more abstract symbolic language than that afforded by our beloved QWERTY keyboards?
I will be willing to bet $100.00 (US) that programming with a keyboard goes the way of punch cards by 2025.
(This of course assumes that we don't slip into a dark age caused by global economic recession, or blow ourselves up in a nuclear holocaust.)
- Good judgement comes from experience; experience comes from bad judgement. -- Mark Twain
- There's a remedy for everything but death. -- The wise fool, Sancho Panza
- The purpose of a system is what it does. -- Anthony Stafford Beer
Offline
Also, reflect for a moment on the language of mathematics. I am unaware of any written language that is more expressive. That said, wouldn't it make sense to write or draw, as opposed to type, a program using a more abstract symbolic language than that afforded by our beloved QWERTY keyboards.
Master Foo Discourses on the Graphical User Interface.
I can redefine what a superscript means any time I wish. However, even if we agree on a strict one-size-fits-all definition for superscripts, what happens when the computer encounters someone with messy handwriting?
I will be willing to bet $100.00 (US) that programming with a keyboard goes the way of punch cards by 2025.
(This of course assumes that we don't slip into a dark age caused by global economic recession, or blow ourselves up in a nuclear holocaust.)
I will take that bet ;-)
As the complexity of our computational infrastructure grows, it will become ever more important to strictly adhere to simple design philosophies, and the elite will be ever more distinguished by their ability to step outside the constraints of captive graphical user interfaces.
Offline
Having used Labview in my student days, it's obviously NOT for programmers who 'think' in C and related languages. It requires a new way of thinking about the process to be used effectively; ironically, those classmates who were less exposed to text-based programming caught on much faster and produced much better results.
Aha, evidence of newer generations of programmers moving toward it! I admittedly didn't get very far into it, as I was turned off almost immediately. I mean, given the choice of learning something that looks tedious or going with what you like and getting more time to build and play with robots? There's no contest there for me.
My question is, when you say "produced much better results", do you mean compared to the people who already knew text-based programming languages programming in a visual one? Or are you making a comparison between text and visual?
Offline
Will programming become more graphical (e.g. flow charts, UML diagrams, etc.)? Setting egos aside, does it make sense for programming to be more graphical?
I think it’s interesting to compare to digital circuit design. In the past circuits were designed graphically, but for efficiency reasons, the digital logic industry has largely moved to text‐based hardware description languages like VHDL or Verilog. Can you imagine doing a complex design like the UltraSPARC processor graphically instead of in Verilog?
ngoonee wrote:Having used Labview in my student days, it's obviously NOT for programmers who 'think' in C and related languages. It requires a new way of thinking about the process to be used effectively; ironically, those classmates who were less exposed to text-based programming caught on much faster and produced much better results.
Aha, evidence towards newer generations of programmers moving toward it!
Well, I wouldn’t call that a “new generation of programmers.” Like others here, I’ve been forced to use Labview at times (and personally hated it), but my friends who loved it tended to be hardware‐focused people who hated programming, and never willingly programmed before or after the class.
Offline
Paul Graham (Hacker News, Y Combinator, Lisp, etc.) has already covered this idea very well.
http://www.paulgraham.com/hundred.html
Last edited by Nisstyre56 (2011-12-02 01:37:12)
In Zen they say: If something is boring after two minutes, try it for four. If still boring, try it for eight, sixteen, thirty-two, and so on. Eventually one discovers that it's not boring at all but very interesting.
~ John Cage
Offline
Programming needs to keep a tight distance from the machine abstractions to make sense. Programming with high-level abstractions would still need some dirty and complex programming underneath to work. That's why I think that 'keyboard programming' will still be around for some time.
I think of the programming languages of the future as something as elegant as Haskell, as refined and concurrent as Erlang, as practical as Python or Ruby, and as efficient and straightforward as C. Contemporary programming languages have torn down the wall between the functional and imperative worlds.
Functional no longer means slow and bizarre, and imperative isn't always ugly and dirty.
Offline
gbrunoro - if you're interested in Erlang and Haskell, then the OO/functional hybrid to look at is probably Scala - Erlang-style actor concurrency, strong support for functional programming, a very sophisticated type system, runs on the JVM and interops with straight Java.
As for the "touch screen programming" meme... it makes about as much sense as the tendency for designers to put "streamlined" fins and other idiocies on cars and toasters in the 50s. A touch screen interface is just irrelevant to the actual problems of programming. It makes sense for some user tasks, yes, but expecting users and programmers to use the same interface is as ridiculous as expecting a watchmaker's lathe to work like an alarm clock. Interfaces need to fit the problem and solution domain.
Less intellectually, touchscreen monitors have been available for decades - and there are good reasons why mice are generally preferred - touchscreens are RSI factories in prolonged use, even more so than mice.
Offline
The future of programming will, in my view, shift more toward Domain Specific Languages (DSLs). A good example is Matlab, which is able to perform very complicated subroutines while still keeping the code very readable and specific to a particular domain (e.g. mathematics, engineering, biology). The ultimate scenario is that there will be an age, very long from now, when we simply tell a computer what we want and it will figure out the rest. Programming will eventually become a redundant practice.
Offline
The future of programming will in my view shift more toward Domain Specific Languages (DSL). A good example is for instance Matlab, that is able to perform very complicated subroutines while still keeping the code very readable and specific to a particular domain (e.g. Mathematics, Engineering, Biology).
While DSLs may have their niches, I disagree with the idea that they will come to predominate.
DSLs are good for disciplines that have splintered off from each other. Such splintering can have value in that it permits specialists to focus their energies more efficiently. On the other hand, it can also lead to duplicated effort. For example, consider the medical doctor who recently "invented" Simpson's Method for numerical integration, and named it after himself. Are you sure biology should have its own DSL? - or maybe biologists and their ilk should learn a modicum of mathematics and computer science.
From a different angle, I'm reminded of Stephen Jay Gould's argument that multicellular organisms actually make up but a fraction of a percentage of the biomass currently living on this planet. High-level and domain-specific languages are like multicellular organisms: made up of, reliant upon, and greatly outweighed by the (mostly invisible) single-celled organisms (i.e. software written in C, Lisp, *asm, and friends).
The ultimate scenario is that there will be an age, very long from now, when we simply tell a computer what we want and it will go figure. Programming will become a redundant practice eventually.
I have trouble seeing how you could substantiate that. Computers and brains are both bound by the laws of physics and computation. The only way a computer could match our adaptability is by employing heuristic algorithms that would make it as subject as we are to settling for local optima.
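To make the "settling for local optima" point concrete, here is a toy hill-climbing sketch in C++ (the landscape values are invented purely for illustration): a greedy search that only ever moves to a better neighbour can stall on a small peak and never see the global optimum.

#include <cstdio>

int main()
{
    // A tiny "fitness landscape": index 2 is a local optimum (8),
    // index 7 is the global optimum (15).
    const int landscape[] = { 1, 4, 8, 3, 5, 9, 12, 15, 6, 2 };
    const int n = sizeof(landscape) / sizeof(landscape[0]);

    int pos = 1;  // start near the small peak
    for (;;) {
        int best = pos;
        if (pos > 0 && landscape[pos - 1] > landscape[best]) best = pos - 1;
        if (pos + 1 < n && landscape[pos + 1] > landscape[best]) best = pos + 1;
        if (best == pos) break;  // no better neighbour: the climb stops here
        pos = best;
    }

    // Prints "stuck at index 2 with value 8" - not the global optimum, 15.
    std::printf("stuck at index %d with value %d\n", pos, landscape[pos]);
    return 0;
}

Real metaheuristics (random restarts, annealing, genetic operators) exist precisely to reduce, not eliminate, this kind of stalling.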
If you're thinking of quantum computers, it has not yet even been shown that they reside in a different computational class from classical machines.
Last edited by /dev/zero (2011-12-08 21:48:58)
Offline
geniuz wrote:The ultimate scenario is that there will be an age, very long from now, when we simply tell a computer what we want and it will go figure. Programming will become a redundant practice eventually.
I have trouble seeing how you could substantiate that. Computers and brains are both bound by the laws of physics and computation. The only way a computer could match our adaptability is by employing heuristic algorithms that would make them as subject as we are to settling for local optima.
If you're thinking of quantum computers, it has not even yet been shown that they reside in a different computational class to classical machines.
Intelligent computers? I, for one, welcome our new computer overlords.
The past century is full of 'impossible' things which are now considered pretty routine. Hard physical limits cannot be broken, but they may be bypassed (hence our ability to fly tons of steel). No reason to think the limits on computers and/or intelligence are absolute.
Allan-Volunteer on the (topic being discussed) mailn lists. You never get the people who matters attention on the forums.
jasonwryan-Installing Arch is a measure of your literacy. Maintaining Arch is a measure of your diligence. Contributing to Arch is a measure of your competence.
Griemak-Bleeding edge, not bleeding flat. Edge denotes falls will occur from time to time. Bring your own parachute.
Offline
Intelligent computers? I for one welcome our new computer overlords
The past century is full of 'impossible' things which are now considered pretty routine. Hard physical limits cannot be broken, but may be bypassed (hence the reason we're able to fly tons of steel). No reason to think computers and/or intelligence limits are absolute.
I can also think of counter-examples such as the PKD story that has colonies on Mars by 1990.
Will our technology continue to improve? Yes. Will computers continue to seem more intelligent? Yes.
And probably we will one day have genuine AI - but if so, it will remain subject to the foibles that plague our own intelligence.
Offline
The future of programming languages according to Cyrus: there will be more of them, each claiming superiority over all of its predecessors and peers because of some new function/library/feature. There will always be small cult followings for older programming languages touting their superiority over all other languages for random and esoteric reasons. And out of all of the above, new, similar languages will branch out. In the end, CS will be a much more annoying degree to achieve.
Hofstadter's Law:
It always takes longer than you expect, even when you take into account Hofstadter's Law.
Offline
While DSLs may have their niches, I disagree with the idea that they will come to predominate.
DSL is good for disciplines splintered off from each other. Such splintering can have value in that it permits specialists to more efficiently focus their energies. On the other hand, it can also lead to duplicated effort. For example, consider the medical doctor who recently "invented" Simpson's Method for numerical integration, and named it after himself. Are you sure biology should have its own DSL? - or maybe biologists and their ilk should learn a modicum of mathematics and computer science.
From a different angle, I'm reminded of Stephen J. Gould's argument that multicellular organisms actually make up but a fraction of a percentage of the biomass currently living on this planet. High level and domain-specific languages are like multicellular organisms, made up of, reliant upon, and greatly outweighed by the (mostly invisible) single celled organisms (i.e. software written in C, Lisp, *asm, and friends).
I am no biologist, so I am in no position to judge whether biologists ought to have a working knowledge of CS, though I'm quite sure they're taught a fair share of math. I simply listed the field because it was once brought to my attention that there exists a DSL for biology (SBML), but perhaps a biologist can shed more light on this discussion. Also, could you be more specific as to what you mean by "duplicate effort"? I fail to see how this is necessarily a bad thing. In science, people oftentimes derive things independently; it's actually a great way of verification.
As to your analogy, I'm missing the point. Are you implying that complex systems that are made up of or dependent upon small and simple systems are inferior to the smaller systems simply because they are more complex and appear in smaller numbers? Wouldn't that make humans inferior to, e.g., ants or fruit flies for that matter, if you extend this analogy? Granted, I might be underestimating the complexity of these creatures, but I'm sure there is some validity in my statement.
I have trouble seeing how you could substantiate that. Computers and brains are both bound by the laws of physics and computation. The only way a computer could match our adaptability is by employing heuristic algorithms that would make them as subject as we are to settling for local optima.
If you're thinking of quantum computers, it has not even yet been shown that they reside in a different computational class to classical machines.
Mind you, metaheuristics is a very young field. Research into how things like genetic algorithms can be optimized for certain classes of problems has only just begun. Nonetheless, I fail to see how their inability to locate global optima is necessarily a weakness. The point is that computers are computationally strong machines, and these types of algorithms can find a wide range of solutions that might be satisfactory for any intended purpose at speeds no human will ever be able to match.
This is why I also don't quite see how quantum computers are relevant to this discussion. Computers are already able to perform very complicated computations at very high speeds. All you really need in order to have a computer do what a human literally asks is intelligent voice recognition software, which as far as I know is already pretty advanced even in something like an iPhone. The way I see it, it's just evolution... humans have evolved to use speech as their preferred means of communication, so if you can point out a specific "computational" obstacle that designers cannot possibly overcome to have computers "understand" speech, ever, I would be glad to find out.
Offline
I am no Biologist, so I am in no position to judge whether Biologists ought to have working knowledge of CS, though I'm quite sure they're taught a fair share of Math. I simply listed the field because it was once brought to my attention that there exists a DSL for Biology (SBML), but perhaps a Biologist can shed more light on this discussion. Also, could you be more specific as to what you mean by "duplicate effort"? I fail to see how this is necessarily a bad thing. In Science, people often times derive things independently, its actually a great way of verification.
Some biologists do first-year maths, and all doctors do. For many biology degrees, however, it's sufficient to have passed maths in the final year of high school. At least 50% of the first year of a standard biology degree will then consist of biology and chemistry subjects; the rest remains up to individual preference. The percentage of biology/chemistry then increases with each year level. You should hear statisticians bitch about how bad the biomedical literature is ;-)
On duplicate effort, I gave an example. A more detailed explanation of that example appears here. Oops, it was the Trapezoid Rule, not Simpson's Method - my bad.
To verify the Trapezoid Rule, it suffices to check the literature. Scientific illiteracy is not a great strength in any scientist.
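For reference, the rule in question is tiny. A minimal composite Trapezoid Rule sketch in C++ (the function and variable names here are mine, just for illustration):

#include <cstdio>

// Composite trapezoid rule: approximate the integral of f over [a, b]
// using n equally spaced subintervals.
template <typename F>
double trapezoid(F f, double a, double b, int n)
{
    const double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);
    return h * sum;
}

int main()
{
    // Example: the integral of x^2 over [0, 1] is exactly 1/3; with 1000
    // strips the trapezoid estimate agrees to about six decimal places.
    auto square = [](double x) { return x * x; };
    std::printf("%.8f\n", trapezoid(square, 0.0, 1.0, 1000));
    return 0;
}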
As to your analogy, I'm missing the point. Are you implying that complex systems that are made of up or dependent upon small and simple systems are inferior to the smaller systems simply because they are more complex and appear in smaller numbers? Wouldn't that make humans inferior to e.g. ants or fruit flies for that matter if you extend this analogy? Granted, I might be underestimating the complexity of these creatures, but I'm sure there is some validity in my statement.
I think it would be a bit strong to say that more complex systems are inferior to the simpler systems they rely on; however, I think it's a bit teleological to regard the more complex systems as inherently "better", in some objective sense. We need single-celled organisms, but they don't need us.
Mind you, metaheuristics is a very young field. Research into how things like genetic algorithms can be optimized for certain branches of problems has only yet been initiated. Nonetheless, I fail to see how their inability to locate global optima is necessarily a weakness. The point is that computers are computationally strong machines and these types of algorithms can find a wide range of solutions that might be satisfactory for any intended purpose at speeds no human will ever be able to accomplish.
You do realise how fast the human mind works, don't you? Computers might seem better, because we know how to give them recipes for solving simple, well-defined tasks. If we want them to be good at the same things as us, it will only be by equipping them with the same "flaws" that make us appear slow at times. You can't have your cake and eat it too.
This is why I also don't quite see how quantum computers are relevant to this discussion.
I was only pre-empting an argument I anticipated you raising. If you hadn't planned to raise it after all, then let's get on with our lives.
Computers are already able to perform very complicated computations at very high speeds. All you really need in order to have a computer do what a human literally asks is intelligent voice recognition software, which as far as I know is already pretty advanced even in something like an iPhone. The way I see it, its just evolution...humans have evolved to use speech as their preferred means of communication, so if you can point out a specific "computational" obstacle that designers can impossibly overcome to have computers "understand" speech, ever, I would be glad to find out.
I think you misunderstood me. I never said computers could never match us. Unless we lend credence to magical ideas that the human mind is somehow unconstrained by the laws of physics, then certainly anything we can do can also be implemented in some other medium such as an electronic device.
I would only like to contend that the future will not involve abandoning generic, low-level languages such as C. The computers of the future will not be all-knowing gods of perfect organisation and insight. I think anyone who thinks so should try watching less sci-fi and actually try their hand at programming an AI. That will really show, first-hand, what a difficult task it is and how far we still have to go before we get anywhere close to such fantasies.
Last edited by /dev/zero (2011-12-11 23:29:03)
Offline
Some biologists do first year maths, and all doctors do. For many biology degrees, however, it's sufficient to have passed maths in the final year of high school. At least 50% of the first year of a standard biology degree will then consist biology and chemistry subjects; the rest remains up to individual preference. The percentage of biology/chemistry then increases with each year level. You should hear statisticians bitch about how bad the biomedical literature is ;-)
On duplicate effort, I gave an example. A more detailed explanation of that example appears here. Oops, it was Trapezoid Rule, not Simpson's Method - my bad.
To verify Trapezoid Rule, it suffices to investigate the literature. Scientific illiteracy is not a great strength in any scientist.
I think the mistake of the scientist in question was not that he re-derived the Trapezoid Rule, but that he published it as if it were a new invention, while it already existed and had been published in some form. That could certainly have been prevented by checking the literature before publishing. In fact, I'd not even blame the scientist in question too much, but rather the committee that approved his paper for publication in the first place.
Still, I think it's a long stretch to compare the derivation of a mathematical rule or method to the invention of a complete DSL. The whole point of a DSL is that it allows experts in a specific domain to focus their efforts on creatively solving problems directly related to their field of expertise. When specific DSLs are integrated into university curricula worldwide, I hardly think there will be much duplicate effort going on. Also, in the world of OSS, there are multiple tools that can perform the exact same job. Do you, for instance, consider the existence of both of the mail clients Mutt and Alpine to be duplicate effort?
I think it would be a bit strong to say that more complex systems are inferior to the simpler systems they rely on; however, I think it's a bit teleological to regard the more complex systems as inherently "better", in some objective sense. We need single-celled organisms, but they don't need us.
I still don't quite see how this can be interpreted as a necessary weakness or an argument against more complex systems. Sure, DSLs can be dependent upon lower-level languages, but if they are considered to increase the effectiveness and efficiency of certain experts, what exactly stops them from becoming dominant and continually evolving?
You do realise how fast the human mind works, don't you? Computers might seem better, because we know how to give them recipes for solving simple, well-defined tasks. If we want them to be good at the same things as us, it will only be by equipping them with the same "flaws" that make us appear slow at times. You can't have your cake and eat it too.
I wasn't implying computers will ever be able to mimic the human brain, and I'm not even sure whether that is something we necessarily want to strive for. All I was saying is that computers have already become indispensable tools in virtually every scientific and engineering discipline. They are computationally strong machines able to solve numerically involved problems at rates no human can ever hope to accomplish. It is this very aspect that will continue to guarantee the success of computers, not AI per se. Again, I believe computers will never (at least not while I'm alive) be able to truly independently mimic and outperform the human brain, especially when it comes to aspects like creativity, i.e. the very aspects of human intelligence scientists have not even been able to understand and quantify to this day. Hence, humans will always remain "in the loop" to a large extent.
I think you misunderstood me. I never said computers could never match us. Unless we lend credence to magical ideas that the human mind is somehow unconstrained by the laws of physics, then certainly anything we can do can also be implemented in some other medium such as an electronic device.
Don't forget that the laws of physics are "laws" that have been defined and created by humankind for its own convenience. Just this year, experiment has suggested that a concept as fundamental as the speed of light might not be as absolute as was so widely acknowledged by the scientific community. This, however, hasn't stopped mankind from using these fundamental "laws" to invent e.g. radio communication and electronic devices.
I would only like to contend that the future will not involve abandoning generic, low-level languages such as C. The computers of the future will not be all-knowing gods of perfect organisation and insight. I think anyone who thinks so should try watching less sci-fi and actually try their hand at programming an AI. That will really show, first-hand, what a difficult task it is and how far we still have to go before we get anywhere close to such fantasies.
I haven't said low-level languages will be abandoned completely; I think they will remain to serve their purpose as a base upon which higher-level languages (like DSLs) are built. In that sense, I believe the user base of these low-level languages will become more limited to computer scientists, i.e. to the people responsible for "formulating suitable abstractions to design and model complex systems" (source). Having said that, I still don't see why it is so far-fetched that, for the rest of the world, physically telling a computer what to do in their native tongue, as opposed to typing it in some generic text-based programming language, will become the de facto standard. Hence, I will reformulate my statement: programming as most people know it today will eventually become a redundant practice.
Last edited by geniuz (2011-12-12 11:48:28)
Offline
I think the mistake of the scientist in question was not that he re-derived the Trapezoid Rule, but that he published it as if it was a new invention, while it existed and was already published in some form. That certainly could be prevented by reconciling to literature before publishing. In fact, I'd not even blame the scientist in question too much, rather the commission that approved his paper to be published in the first place.
Sure, he didn't publish the paper in isolation - but the peers who reviewed it would have been people from the same or a related profession. My point is that this kind of error results from splintering of the disciplines. You seem to think that the splintering should be somehow undone or reversed at the level of the peer reviewer or the publisher - but non-specialists (or specialists in other areas) will not be invited to comment, because it will be assumed that they lack the qualifications to do so.
Still, I think its a long stretch to compare the derivation of a Mathematical rule or method to the invention of a complete DSL.
I think I didn't make it clear where I was going with that. My point was that splintering of specialisations from each other permitted this duplication of effort to take place. I see DSLs as a way to splinter disciplines from each other. Thus, the rise of DSLs would make it easier for duplicate effort to take place.
The whole point of a DSL is that it allows experts in a specific domain to focus their efforts on creatively solving problems directly related to their field of expertise. When specific DSL's are integrated in university curricula world-wide, I hardly think there will be much duplicate effort going on.
Sorry, but this seems a little naive to me. I can see why you might think DSLs would permit specialists to more efficiently focus on solving their particular problems, and I don't entirely disagree, but even if it's true, I don't think it's controversial to think this will lead to considerably more duplicate effort.
Also, in the world of OSS, there are multiple tools that can perform the exact same job. Do you for instance consider the existence of both of the mail clients Mutt and Alpine as duplicate effort?
I haven't used Alpine, but if it's exactly like Mutt, then sure, I would certainly say that's duplicate effort. The more people who use one single piece of software, the more bugs can be filed against that software.
Also, I think this is a false analogy from the start. Mutt and Alpine are both written in C. If we're talking about the connection between duplicate effort and DSLs, let's consider the fact that "communicating and storing computational models of biological processes" (from the Wikipedia page on SBML) could be better done in Lisp, Lex/Yacc, or with the Lemon C++ library, instead of coming up with some new way of using harmful XML.
I still don't quite see how this can be interpreted as a necessary weakness or argument against more complex systems. Sure, DSLs can be dependent upon lower level languages, but if they are considered to increase the effectiveness and efficiency of certain experts, what exactly stops them from becoming dominant and continually evolving?
The fact that higher and higher levels of specialisation are concomitant with smaller and smaller user bases who still need to communicate with experts in other groups. I'm not saying DSLs should never be used. I just think they are essentially self-limiting, and just as bacteria will long outlive us more complicated life-forms, so too will the lower-level languages long outlive more cumbersome DSLs.
I wasn't implying computers will ever be able to mimic the human brain, and I'm not even sure whether it is something we necessarily want to strive for. All I was saying is that computers have already become indispensable tools in virtually every scientific and engineering discipline. They are computationally strong machines able to solve numerically involved problems at rates no human can ever hope to accomplish. It is this very aspect that will continue to guarantee the succes of computers, not AI per se. Again, I believe computers will never (at least not while I'm alive) be able to truly independently mimic and outperform the human brain, especially when it comes to aspects like creativity, i.e. the very aspects of human intelligence scientists have not even been able to understand and quantify to this date. Hence, humans will always remain "in the loop" to a large extent.
This all seems reasonable.
Don't forget that laws of physics are "laws" that have been defined and created by humankind for its own convenience. Even recently this year, practice has shown that a concept as fundamental as the speed of light might not be as accurate as it was so widely acknowledged by the scientific community. This however hasn't stopped mankind from using these fundamental "laws" to invent e.g. radio communication and electronic devices.
I don't think you're disagreeing with me. I'm aware of the fragile nature of what we call the laws of physics - but like you say, they're good enough that we can do things with them. It looks like our knowledge of the speed of light breaks down on really large length scales, and it looks like our knowledge of gravity breaks down on really small length scales, but the brain is in the middle. In between, our models for how the physical world works are very accurate, and it is in this regime that the brain operates.
Also, you should note that most scientists regard those neutrino test results as residing within the bounds of experimental error, and therefore not strongly indicative that the neutrinos really did break light speed.
I haven't said low-level languages will be abandoned completely, I think they will remain to serve their purpose as a base upon which higher level languages (like DSLs) are built. In that sense, I believe that the user base of these low level languages will become more limited to computer scientists, i.e. to the people responsible for "formulating suitable abstractions to design and model complex systems" (source).
Seems reasonable.
Having said that, I still don't see why it so farfetched that for the rest of the world, physically telling a computer what to do in their native tongue as opposed to typing it in some generic-text programming language, will become the de-facto standard. Hence, I will reformulate my statement by stating that programming as most people know it today will eventually become a redundant practice.
I don't think a friendly human interface should be considered the same as programming. This thread, if we recall the OP, is about programming languages of the future, not user interfaces of the future. I certainly agree that user interfaces will become more intelligent and attractive. I only disagree that this will have any strong impact on how we do programming.
Last edited by /dev/zero (2011-12-12 19:20:59)
Offline