In his editor’s letter in the print edition of Communications of the ACM, Moshe Vardi comments on the direction scholarly publishing is taking.

Most businesses have sellers and buyers, but academic publishing has authors, publishers, and libraries. In his view, print media and the peer review process have kept the quality from dropping. The last word on what gets published now belongs not to editors in the field or to the peer review process, but to the publisher. Publishers have also opened up a “vanity press” to authors, and in turn the “publish or perish” paradigm has led authors to take advantage of it, albeit under different names.

The profit motive came up again as a negative in all this. Ho-hum. Again? Will we never learn?

One commenter, Andrew Adams, pointed out that the reader was left out of this discussion.

We all know that real breakthroughs in science meet resistance because they are mistaken for low-quality or quack ideas. But with the barrier to on-line publishing now so low that even a caveman can do it, those breakthroughs, along with the quack ideas, can reach any audience that can find them or cares to read them.

But never fear. Bach and Beethoven still have an audience that appreciates that kind of quality, and there is still easy-come, easy-go entertainment for the masses.

The editor’s letter is titled “Predatory Scholarly Publishing”. He contrasts a “typical business”, which has sellers and buyers, with the “scholarly publishing business”, in which publishers and libraries are the sellers and buyers, but you also have (1) authors and (2) editors and reviewers. The editors and reviewers have been the gatekeepers, rewarded with pay, a sense of civic duty, or scholarly prestige.

The thrust of the essay is to lament vanity publishing and the fact that publishers themselves are making more of the decisions about what gets published. Anyone who wants to find an audience willing to read his stuff will find one.

There’s that “evil profit motive” again, bringing down quality. But it is also bringing down cookie-cutter conformity. Quality stuff is still “out there” along with bad stuff.

Take a paper that would have gone into the trash before. The author pays his “registration fee”, and now the paper has a better chance of getting published. Say, a good chance. But then anybody who reads it or sees it has a chance to pick it apart.

The P versus NP episode was, in my opinion, a turning point in this whole discussion. The claimed “proof” was “published” via the Internet, and it fell hard within a lightning-fast 24 hours. So quality, and the lack of it, gets exposed more quickly than ever now. What’s the problem?

The problem seems to be that those who have held the role of gatekeepers up to now are afraid the reader is going to get a lower-quality product. Their position as gatekeepers had a purpose, but now the terrain is shifting.

Quality will not lose its respect. As one comment pointed out, the readers and the audience are pretty much the same thing. The readership and the authorship that respect quality in a field will find each other.

In my opinion, this will widen the field for non-conformist advances, and new ideas are almost by definition non-conformist. The paradigm of how to publish science is shifting, and with it there may be a few, or many, other paradigm shifts.

Better to let knowledge grow like wildfire. Truly bad stuff will get hooted down. The theory that the moon is made of green cheese does not stand a chance.

And never fear: ACM publications and periodicals will be held in high esteem by their readership.

But why should the people who pay for the end product, the consumers of the product, not have any say in it? The world will go on.

In my opinion, it is a torture of history and common sense to see the profit motive as a net negative in the computing world anyway.

The profit motive has driven great advances in computing. The main reason academic computing gets funding is that it has proven itself a fertile field of study that has yielded a great many fruits in practical applications. The industries behind the vacuum tube, the transistor, the microchip, programming languages, airliners, and all the rest paid for much of that research directly, and often indirectly. McDonnell Douglas was one of the major sponsors of engineering studies at Washington University, for example.

What we do need in the United States and everywhere else is a better educated populace. The only way to get that is to have an independently educated populace. By independent I mean absolutely free of any political meddling, and any government funding is political meddling.

As you’re finding out with publishers in the academic field, the guy who pays the bills calls the shots. Austrian economics expresses it as the concept that demand drives the market. This is the way it should be. Until now, the bottleneck of print resources created demand for those with a reputation for selecting what goes into print. Now that is opening up.

For politically driven media like Time magazine and the New York Times, this means that a readership whose choices were once limited by a few dozen media companies now has a much broader selection on-line, and the body politic is making good use of it. Editorial offerings that were not available before are now accessible over the Internet.

We the readers are making good use of it. Bypassing gatekeepers is coming into vogue.

Wait! Not so much bypassing gatekeepers; the fact is, the readership is morphing into its own gatekeeper. New structures of respect and prestige are forming.

But we do need a more enlightened and educated body politic. For that, you need to move toward a populace that recovers the intellectual acumen of the generation whose 1910 high school entrance exams would stump the Harvard grad of today and flunk more than a few MIT grads to boot.

With alternative publishing outlets, new sciences and technologies with huge potential are getting a more eager audience and more play.

For example, the traditional publishing structure in all probability would have prevented Fleischmann and Pons from publishing anything in 1989. They bypassed that bottleneck and told the world directly, informing physics labs and physics students everywhere that there was a new energy kid on the block worth checking out: www.infinite-energy.com/resources/faq.html

Unable to censor the news, the gatekeepers of publications and research money, who did not want to see their careers and their expensive educations devalued, used the same means to debunk it. Unsuccessfully, it turns out.

MIT has a billion-dollar hot-fusion program going, funded largely by government money. When it announced that it had failed to get the results Fleischmann and Pons got, one of its own faculty bolted, became a whistle-blower, said they did get promising results, and went off and founded the “New Energy Foundation” to fund further research into this promising technology.

It is now getting more attention worldwide, in fact, and Arthur C. Clarke added his name to those calling on the US president and institutions around the world to invest more in this new, clean, potentially very cheap and very abundant energy source.

And what traditional academic publishing gatekeeper would have given the time of day to Luis Cruz, the Honduran teen who invented the “Eyeboard”, an inexpensive eye-tracking device that will greatly help people with disabilities?

Command and control does not work in economic domains, so why should it work in science or academia?

Suppression of the first advocate of the germ theory of disease resulted in a great many unnecessary deaths from infectious diseases, because the medical profession refused to believe that invisible little animals were making people sick. The quality-minded establishment punished him for telling them the truth (“You’re killing people!”), called him crazy for saying so, and had him put away in an insane asylum.

“We can do nothing against the truth, but for the truth”.

 


In big companies and big outsourcing shops, it often happens that projects are broken down into small, measurable pieces, judged by how quickly they can be done. Training in new technologies and new languages takes bites out of budgets. How quick can you get this done? And forget about modernizing your code; it’s not budgeted.

I sympathize: why fix it if it ain’t broke? You’ve been taking the bugs out for thirty years, and you don’t want to do that again. But think about it. Say you have a hundred programs that access several files, each file touched by almost all of them, and when you have to expand an amount field you have to change 50 programs and recompile the other 50.

Brace yourselves: your dollar-amount fields have a high probability of needing expansion soon.

And what is your company going to do in twenty more years when a change is needed to the OCL you run with STRS36PRC, after all the guys are gone who used to know what to do with a matching-record indicator and all that?

In a recent iPro Developer or MC Press Online article, somebody mentioned a study showing that 60-something percent of developers’ time is now spent on maintenance. Tools that make maintenance easier are great, and there are some tool vendors that address that (like Databorough and Hawkeye). But I think many companies would be well served by a well-planned, minimally disruptive, stepwise move toward modernizing their code base: breaking up the monolithic order-entry programs into small pieces, replacing the in-line routines in fifty programs that access the same customer information with external routines, and so on.
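To make that last idea concrete, here is a rough sketch, in modern free-form RPG IV, of what externalizing one of those in-line routines might look like. Everything here is hypothetical: the file CUSTMAST, its record format CUSTREC, and the field CUSTNAME are invented names, and the exact declarations would depend on your release and your files. The point is that fifty programs call one exported procedure instead of each doing its own CHAIN, so when the file changes you touch one service program instead of fifty programs.

```
**free
ctl-opt nomain;

// Hypothetical service-program module: one exported procedure wraps all
// access to the customer master, so callers never touch the file directly.
dcl-proc getCustomerName export;
  dcl-pi *n char(30);
    custNo packed(7: 0) const;
  end-pi;

  dcl-f CUSTMAST keyed usage(*input);        // assumed customer master file
  dcl-ds cust likerec(CUSTREC: *input);      // assumed record format name

  chain (custNo) CUSTREC cust;
  if %found(CUSTMAST);
    return cust.CUSTNAME;                    // assumed 30-character name field
  endif;
  return *blanks;
end-proc;
```

A calling program just binds to the service program and codes something like custName = getCustomerName(custNo); the in-line file spec and CHAIN in each of those fifty programs go away.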

ROI? Sometimes it’s really hard to calculate, but if you can cut maintenance on the other end by even half, then what would you say?

 


I work every day on a mixed bag of code, including code written in the 1980s in one of the oldest programming languages still around. Really outdated stuff, what’s known as RPG II. It was the main coding language used on the IBM System/34 and on the IBM System/36 as well.

HP and Wang made their own RPG compilers as a way to pull customers away from IBM, but they apparently mostly quit trying after IBM brought out the System/38 and then the AS/400, the business system that blew DEC’s rival system and HP’s own ‘midrange’ into history. Those of us who have worked on these IBM midrange systems (IBM now calls the segment “midmarket”) never could understand why everybody didn’t immediately move their business to the most stable and securable system available, so we always blamed IBM for hiding it, and hiding its advertising budget from it, presumably to protect its lucrative mainframe business.

I’ve been through major upgrades and conversions of software from one base to another.

But at a lower level than that, I’ve considered different factors relating to program changes. That very old code base, for example, could benefit from a major overhaul and conversion to a more modern one, which could drastically cut code-change costs (in hours) and reduce the program halts that haunt ancient code bases that have gone through patches, fixes, more patches, enhancements, and modifications over a span longer than the lifetime of the younger developers among us.

Programmers are generally attracted to “refactoring” old code, but it’s a bad word to CIOs and managers, who imagine all the kinks that come with implementing changes and new code.

But we can minimize the pain. I’ve thought of some things that make life easier for code changers and for projects in general.

(1) Gosh, guys, get a good code analyzer! They’re around, and yes, the IBM i world has them too. Maybe especially. On the IBM i, for example, there are tools that drill down into your code and claim to come close to extracting your business rules, identifying unused “orphan” code, and maybe even suggesting possible improvements. They exist for COBOL code too, though I believe those are not quite as thorough or detailed as the ones for RPG, which is understandable; still, they’re better than nothing.

(2) We can minimize the disruption of changes.

We can make changes in steps, unless it’s a major difference in business rules; there’s no way around that. But for changes that fall short of that, you don’t have to change all the menus and programs at once; change them a bit at a time. That saves money and the users’ nerves, provides a proving ground for the changes, and helps dampen user complaints, which usually come from users who have gotten comfortable, often with good reason, doing their jobs a certain way and have a hard time with the new stuff.

Y2K is an example. The best change-overs were either completely invisible or moved smoothly into four-digit years. That project went so well that the world scratched its head, wondered what all the fuss was about, and figured maybe we had pulled a big scam. There was even a line in one of the Star Trek shows (or was it a movie?) where a predecessor of Captain Janeway wondered aloud whether there had really been a problem.
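For the record, the “invisible” kind of fix usually meant leaving the two-digit years in the files and applying a sliding window. The sketch below is a hypothetical free-form RPG IV version of that trick; the pivot of 40 is chosen arbitrarily for illustration, and the procedure name is invented.

```
**free
ctl-opt nomain;

// Hypothetical sliding-window year expansion: two-digit years stay in the
// data, and the pivot decides which century they belong to.
dcl-proc expandYear export;
  dcl-pi *n packed(4: 0);
    yy packed(2: 0) const;
  end-pi;

  dcl-c PIVOT 40;          // assumed pivot: 00-39 means 20xx, 40-99 means 19xx

  if yy < PIVOT;
    return 2000 + yy;
  endif;
  return 1900 + yy;
end-proc;
```

The four-digit route meant expanding the date fields themselves, which is exactly the kind of cross-program change discussed above.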

Of course we coders know we saved their behinds!


CRN story:
Court Ruling: No Copyright Violations In Oracle-Google Java Case:
http://tinyurl.com/6moo43r

Google won its case, but Oracle did get the judge to acknowledge that it has some copyright claims over the use of its code.

Sun didn’t open-source it fast enough. Oracle has a database on Linux, but my opinion of Oracle, unless shown otherwise, is that it can be worse than Microsoft about claiming that its turf extends to whatever it touches.

Google supposedly used “non-copyrightable” APIs to build its own APIs in Android, or so they said, and good for them.

But the rest of us don’t have much of a staff for exploring all that, so maybe we should stay with the pure stuff if we’re going to use open source.


Mel Beckman has written an article that repeats a theme I’ve seen since the days when I first started writing code in the first widely used version of RPG, back in the 1970s. There’s always some variant of “RPG is dead”, “RPG is dying”, “RPG is fading away”, and all that. But I always pay attention, because all things change in this world here below. The main reason he gives seems to be that, according to him, not much new code is being written in the newest version, RPG IV, and that most of the time spent doing RPG goes to maintaining existing RPG code: fixes, enhancements, and the like.

Find it here: http://www.iprodeveloper.com/article/opinion/is-rpg-dead-699217?cpage=6#commentsAnchor

I’m glad iProDeveloper opens its articles up for comments. On web sites that open up for comments so readers can offer their reactions, I find these days that the comment section is at least as interesting as the article itself. Actually, the article plus the comments generally makes for good reading.

For example, Mel Beckman listed a number of other languages he considers newer and more promising for writing new code, the ones he calls “more-modern” languages: “More-modern languages such as C (including C++ and C#), Java, JavaScript, Perl, PHP, Python, and Ruby (all of which run natively on IBM i)”.

Aaron Bartell pointed out that for some of these you have to have multi-layered implementations to make them work, with the extra load of maintaining and configuring each layer. Someone else pointed out that training current staff (the ones who know your business) in new languages is not cheap either.

I’ll add my own observation that with each layer of technology (hardware, third-party software, external servers and applications, and so on) comes a multiplier in maintenance load.

Jon Paris added that in one of his recent classes he was teaching the newest RPG, RPG IV, to programmers who code in C, C++, Java, and other languages, and that the Java programmers were delighted with how easily some functions can be done in RPG.

Well, one more thing. The current generation and latest versions of RPG, known as RPGLE or RPG IV, have incorporated great advances that enable new techniques and possibilities. And while the language does not have object-oriented syntax, with intelligent use of subprocedures and service programs it does offer advantages generally associated with OO programming, such as encapsulating data behind a well-defined set of procedures.
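As a loose, hypothetical illustration of what I mean (every name below is invented), a qualified template data structure can play the role of the “object” while exported subprocedures in a service program play the role of its “methods”:

```
**free
ctl-opt nomain;

// The template DS acts as the "object"; the exported procedures act as
// its "methods". All names are made up for illustration.
dcl-ds order_t qualified template;
  orderNo packed(9: 0);
  total   packed(11: 2);
end-ds;

// "Constructor": returns a fresh, initialized order.
dcl-proc order_new export;
  dcl-pi *n likeds(order_t);
    orderNo packed(9: 0) const;
  end-pi;

  dcl-ds order likeds(order_t) inz;
  order.orderNo = orderNo;
  return order;
end-proc;

// "Method": adds a line amount to the order's running total.
dcl-proc order_addLine export;
  dcl-pi *n;
    order  likeds(order_t);
    amount packed(11: 2) const;
  end-pi;

  order.total += amount;
end-proc;
```

It isn’t inheritance or polymorphism, but it does give you encapsulation and a stable public interface, which is most of what day-to-day business code wants from OO.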

Here’s an alert to utility software providers: there just might be a market for a precompiler that does OO things, one that, for example, expands an embedded OO syntax into RPG code for compiling, similar to how the 4GLs are said to work.

There are some programmers where I work who do new coding in COBOL, too. Nothing wrong with that, depending on the purpose. But more on that, and on refactoring code, in a future article.

 

Related articles

Google guilty of infringement in Oracle trial; future legal headaches loom:

http://arstechnica.com/tech-policy/news/2012/05/jury-rules-google-violated-copyright-law-google-moves-for-mistrial.ars

In what could be a major blow to Android, Google’s mobile operating system, a San Francisco jury issued a verdict today that the company broke copyright laws when it used Java APIs to design the system. The ruling is a partial victory for Oracle, which accused Google of violating copyright law.

But the jury couldn’t reach agreement on a second issue—whether Google had a valid “fair use” defense when it used the APIs. Google has asked for a mistrial based on the incomplete verdict, and that issue will be briefed later this week.

So there you go, it’s official now. Oracle has every intention of putting the Java toothpaste back in the tube, and it has a big legal staff to help do it. The company is famous for staking claims in open-source territory, digging in with its proprietary claws, and making money every way it can, tooth and nail.

They just took Java out of the running, even though Sun had released it to the open-source world.

And by the way, software algorithms are “patentable”? That is as preposterous as patenting a mathematical solution to a math problem, or the directions for getting from here to there, whether on paper, in your mind, or in a thought experiment.

So these two giants have proven arrogant, and they’re going after each other, and they’re acting like the pie is limited.

Larry Ellison has said, “Privacy is dead, get over it.” And Google’s people have said anonymity is dead. Easy for them to say, darlings of Bill Clinton and Barack Obama and other entrenched establishment types. Anonymity is the defense of the poor guy against such powerhouses, and against dictators.

 

Skype replaces P2P supernodes with Linux boxes hosted by Microsoft (updated):

http://arstechnica.com/business/news/2012/05/skype-replaces-p2p-supernodes-with-linux-boxes-hosted-by-microsoft.ars

Microsoft has drastically overhauled the network running its Skype voice-over-IP service, replacing peer-to-peer client machines with thousands of Linux boxes that have been hardened against the most common types of hack attacks, a security researcher said.

The change, which Immunity Security’s Kostya Kortchinsky said occurred about two months ago, represents a major departure from the design that has powered Skype for the past decade. Since its introduction in 2003, the network has consisted of “supernodes” made up of regular users who had sufficient bandwidth, processing power, and other system requirements to qualify. These supernodes then transferred data with other supernodes in a peer-to-peer fashion. At any given time, there were typically a little more than 48,000 clients that operated this way.

Kortchinsky’s analysis, which has not yet been confirmed by Microsoft, shows that Skype is now being powered by a little more than 10,000 supernodes that are all hosted by the company. It’s currently not possible for regular users to be promoted to supernode status. What’s more, the boxes are running a version of Linux using grsecurity, a collection of patches and configurations designed to make servers more resistant to attacks. In addition to hardening them to hacks, the Microsoft-hosted boxes are able to accommodate significantly more users. Supernodes under the old system typically handled about 800 end users, Kortchinsky said, whereas the newer ones host about 4,100 users and have a theoretical limit of as many as 100,000 users.

“It’s pretty good for security reasons because then you don’t rely on random people running random stuff on their machine,” Kortchinsky told Ars. “You just have something that’s centralized and secure.”

Kortchinsky discovered the Linux supernodes using a Skype probing technique he and colleague Fabrice Desclaux first demonstrated in 2006. (PDF versions of conference presentation slides are here and here.)

Kortchinsky’s discovery comes as Microsoft said it’s investigating recent demonstrations of an exploit that exposes the local and remote IP addresses of users who are logged in to the service. The attack reportedly relies on the open-source SkypeKit package.

…more…