From: Ray Dillinger
Subject: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <404CE8CE.7030422E@sonic.net>
I believe Lisp is a better programming language than those now commonly
in use. Its use should be an excellent competitive advantage.  I observe 
that many managers of existing companies don't want to hire people to 
program in Lisp.

This contradiction appears to be a business opportunity waiting to happen;
if we are right that Lisp is a competitive advantage, then we should start 
a business to use it, take the edge it offers, and outcompete companies 
that will not use it.

What application should we work on?  I figure, another way to ask that is, 
what code do we own that has potential commercial value?  Let's put our 
heads together and see what we can do.

My main interest being natural language, most of my code that has potential 
commercial value deals with language.  Here's what I've got:

    1) _VERY_ general parsing code, implementing a Chomsky 
       type 1 parser.  Note that my parser cannot save you 
       from the inefficiencies of pathological languages 
       that take exponential time/space to parse, but given a 
       correct grammar and a body of good representative training 
       data, it can heuristically "learn" rules of applicability to
       save you from pathological underspecified grammars.  It has 
       various extensions to facilitate the writing of "good" or 
       fast grammars even on complex languages, and will save 
       success/deadend/discard information during parses to 
       allow later parses to take advantage of those extensions. 
       This is an experimental facility, obviously targeted at 
       natural language; I'm not familiar with any other parser 
       that does this. 

       While currently targeted at text, it works on structured 
       lisp data as its intermediate form, and could take structured 
       lisp data as its input directly.  In theory, it is possible to
       implement a full compiler as nothing more than a grammar for 
       this parser.

    2) Markov tree implementation and Markov-tree based classifier; 
       This is a good cheap, efficient implementation for an 
       auto-learning classifier, currently geared toward recognizing
       text.  Note that both its accuracy and the amount of training 
       data required are exponential in terms of depth.  With depth 
       set to 2, it is 99%+ accurate at recognizing spam, sorting 
       emails into correct inboxes by subject, or classifying documents 
       longer than a paragraph or so as to topic.  With depth set to 3, 
       it can reliably (95%+) distinguish much more subtle tasks, such 
       as identifying different authors of the same period and general 
       style, on texts of a page or more, or identify input questions 
       reliably enough for an automated question answering system. 
       With depth set to 4, it becomes impractical to use unless truly 
       vast amounts of training data (and vast amounts of CPU time for 
       training it) are available. 

        The classifier operates on lisp data: while it currently 
        is geared to strings, it is capable of handling structured 
        data, and could be used to classify, e.g., code according to 
        its subject matter or the coding style of its author.  (A 
        compressed sketch of the depth-2 idea appears just after 
        this list.)

   3)  Miscellaneous well-developed libraries, written in scheme and 
       my own bizarre dialect (see 4) but easily portable to other 
       lisps:  

        3a) binary trees.
        3b) balanced realtime binary trees
        3c) Markov trees (in terms of binary trees)
        3d) Markov trees (in terms of realtime hash tables)
        3e) realtime hash tables (realtime constraints guaranteed
            by code that distributes any needed table copying over 
            multiple calls)

    4) About 3/4 of a lisp compiler...  it works, translating a lisp 
       dialect which is neither scheme nor common lisp into C, but it 
       does it damned slowly, and is file based, not interactive.  
       It has only two really interesting features: its character library
       (a character is a valid Unicode combining sequence, not a 
       codepoint) and its macrology, which allows "macros" which are 
       both mutable at runtime (though this is inefficient as it triggers 
       JIT recompilation of routines that use them) and applicable as 
       though they were functions.  It has a single top-level namespace, 
       hierarchical sub-namespaces attached to variables/symbols in that 
       namespace, guaranteed space-safe tail recursion, and (new addition 
       this week) scheme's call/cc.  
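
As a rough illustration of the depth-2 idea -- a compressed sketch only,
not the library described in (2) -- here is a bigram classifier with
add-one smoothing that picks the class with the largest summed
log-likelihood:

    (defun bigrams (tokens)
      (loop for (a b) on tokens while b collect (cons a b)))

    (defun train (class tokens model)
      "MODEL maps each class to (total-count . bigram-count-table)."
      (let ((entry (or (gethash class model)
                       (setf (gethash class model)
                             (cons 0 (make-hash-table :test #'equal))))))
        (dolist (bg (bigrams tokens))
          (incf (car entry))
          (incf (gethash bg (cdr entry) 0)))))

    (defun score (class tokens model)
      "Summed log-likelihood of TOKENS under CLASS, with add-one smoothing."
      (let* ((entry  (gethash class model))
             (total  (car entry))
             (counts (cdr entry))
             (vocab  (hash-table-count counts)))
        (loop for bg in (bigrams tokens)
              sum (log (/ (+ 1 (gethash bg counts 0))
                          (+ total vocab))))))

    (defun classify (tokens model)
      (let (best best-score)
        (maphash (lambda (class entry)
                   (declare (ignore entry))
                   (let ((s (score class tokens model)))
                     (when (or (null best) (> s best-score))
                       (setf best class best-score s))))
                 model)
        best))

    ;; (defparameter *m* (make-hash-table :test #'equal))
    ;; (train :spam '("buy" "cheap" "pills" "now") *m*)
    ;; (train :ham  '("lisp" "business" "meeting" "tomorrow") *m*)
    ;; (classify '("cheap" "pills") *m*)   => :SPAM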

What have other people got?  What can we stir together in a pot and make
into a business?

				Bear

From: Robert Bruce Carleton
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <104qcah4jvqhmd6@corp.supernews.com>

One obvious application is spam filtering.  I remember Paul Graham having
some filter examples in lisp.  There's certainly demand for a solution.

Ray Dillinger wrote:

> [... original post quoted in full; snipped ...]

From: Ray Dillinger
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <404FFEE4.E12E3FA3@sonic.net>
Robert Bruce Carleton wrote:
> 
> One obvious application is spam filtering.  I remember Paul Graham having
> some filter examples in lisp.  There's certainly demand for a solution.
> 

Yeah, I was thinking of a spamfilter when I wrote the Markov Tree 
code - but there are free spamfilters now that use the same technology.
Honestly, Graham's 'plan for spam,' and other free implementations based
on the same basic idea or variations on it, have pretty much stomped 
that market. 

The only thing left to do with spamfilters that might still be profitable
is to fix them so idiots can and will use them.  That means a one-click, 
zero-decision install and default configuration that will do what people 
who haven't thought much about it think they want.  And it means working, 
reliably, even if the user is using Outlook or some other rude mail 
client that ignores SMTP standards. 

Aside from straight-up filtering, there are companies that have been trying 
for years - and failing - to make a profit by putting some kind of trust 
certificate in email.  The whole anti-spam market looks like scorched earth 
to me; there are lots of suppliers, they're free, and they're getting easier 
to use.  No sustainable profit will be made with a commercial offering. 

				Bear
From: Ng Pheng Siong
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <c2kn3p$70b$1@reader01.singnet.com.sg>
According to Ray Dillinger  <····@sonic.net>:
> What application should we work on?  

Check out James Robertson's blog; he's the product manager for Cincom
Smalltalk. Periodically he mentions applications deployed by his customers.

The key word is "vertical." Here's a recent example:

    American Nuclear Insurers has been developing a document management
    system using VisualAge Smalltalk. ANI's digital archive is designed to
    support both availability of business critical documents in the event
    of lost or damaged paper documents and broader access to all documents
    by off site staff. ANI reviewed commercially available document
    management systems, but they were prohibitively expensive and required
    a significant amount of effort to prepare and index documents upon
    submission.


-- 
Ng Pheng Siong <····@netmemetic.com> 

http://firewall.rulemaker.net -+- Firewall Change Management & Version Control
http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
From: Ray Dillinger
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <405003B5.A57A7CA@sonic.net>
Ng Pheng Siong wrote:
> 
> According to Ray Dillinger  <····@sonic.net>:
> > What application should we work on?
> 
> Check out James Robertson's blog; he's the product manager for Cincom
> Smalltalk. Periodically he mentions applications deployed by his customers.
> 
> The key word is "vertical." Here's a recent example:
> 
>     American Nuclear Insurers has been developing a document management
>     system using VisualAge Smalltalk. ANI's digital archive is designed to
>     support both availability of business critical documents in the event
>     of lost or damaged paper documents and broader access to all documents
>     by off site staff. ANI reviewed commercially available document
>     management systems, but they were prohibitively expensive and required
>     a significant amount of effort to prepare and index documents upon
>     submission.
> 

Yes.  A proper vertical.  A proper vertical is a group of fairly 
specialized people who have a severe need not served by free or cheap 
software and money to spend on a solution.  Nuclear Insurers are an 
excellent example; they need things from a document management system 
that nobody else needs, and they are willing to pay because they are 
on the hook for literally billions of dollars if the system fails them.  

Hmmm.  Other verticals:  Building and construction trades, Intelligent 
building control systems (reclaiming heat/light/water), large-scale 
architecture, Medical billing, Investment analysis and news tracking, 
Legal records and searches?  There are of course huge vertical markets
for information-warfare and covert-surveillance stuff, but I'd rather 
not be in that business.

Hmmm.  High-reliability software might be a good under-served niche 
applicable to many mission-critical verticals. The "standard industry
practice" has been to write shrink-wrap licenses that disclaim all 
responsibility for software malfunction or crashes.  If we can provide 
a number of small applications that are guaranteed, and financially 
insure our clients against loss from crashes or unhandled exceptions, 
we might be able to provide real business value.  Of course we couldn't 
insure anybody running windows, but that's a small barrier when you 
have people who seriously need reliable systems.  Lisp is a much better 
language than C/etc for writing stuff and guaranteeing it against 
buffer overruns, numeric overflows, etc. With a bit of macrology, you 
can be absolutely sure that every exception is handled.
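
For instance, a minimal sketch of that kind of macrology (illustrative
only, not a finished product): define every entry point through a macro
that refuses to let a condition escape.

    (defmacro define-guarded (name lambda-list fallback &body body)
      "Like DEFUN, but any unhandled condition is logged and FALLBACK
    is returned instead of unwinding past the entry point."
      `(defun ,name ,lambda-list
         (handler-case (progn ,@body)
           (serious-condition (c)
             (format *error-output* "~&~A failed: ~A~%" ',name c)
             ,fallback))))

    ;; (define-guarded parse-amount (string) 0
    ;;   (parse-integer string))
    ;; (parse-amount "12")    => 12
    ;; (parse-amount "oops")  => 0, after logging the condition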

Food for thought. 

				Bear
From: Paul Wallich
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <c2q182$3ua$1@reader2.panix.com>
Ray Dillinger wrote:

> Hmmm.  Other verticals:  Building and construction trades, Intelligent 
> building control systems (reclaiming heat/light/water), large-scale 
> architecture, Medical billing, Investment analysis and news tracking, 
> Legal records and searches?  There are of course huge vertical markets
> for information-warfare and covert-surveillance stuff, but I'd rather 
> not be in that business.
> 
> Hmmm.  High-reliability software might be a good under-served niche 
> applicable to many mission-critical verticals. The "standard industry
> practice" has been to write shrink-wrap licenses that disclaim all 
> responsibility for software malfunction or crashes.  If we can provide 
> a number of small applications that are guaranteed, and financially 
> insure our clients against loss from crashes or unhandled exceptions, 
> we might be able to provide real business value.  Of course we couldn't 
> insure anybody running windows, but that's a small barrier when you 
> have people who seriously need reliable systems.  Lisp is a much better 
> language than C/etc for writing stuff and guaranteeing it against 
> buffer overruns, numeric overflows, etc. With a bit of macrology, you 
> can be absolutely sure that every exception is handled.

In addition to high reliability, you might also want to go for vertical 
markets where you can use Lisp's productivity advantages. Figure out how 
big a problem you can solve or customize in 10/50/100/500/etc hours of 
coding, and target small and medium-sized businesses that have 
big-business organizational/logistical/management problems to deal with. 
The crucial thing about those markets is that they're perfectly willing 
to pay according to the added profit you make for them rather than what 
it "cost" you to produce the software. And by starting at the low end, 
you may be able to bypass some of the marketing/political issues that 
will attend bigger buys.

paul
From: mikel
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <uBp4c.37429$gA5.1220@newssvr25.news.prodigy.com>
Ray Dillinger wrote:

> Ng Pheng Siong wrote:
> 
>>According to Ray Dillinger  <····@sonic.net>:
>>
>>>What application should we work on?
>>
>>Check out James Robertson's blog; he's the product manager for Cincom
>>Smalltalk. Periodically he mentions applications deployed by his customers.
>>
>>The key word is "vertical." Here's a recent example:
>>
>>    American Nuclear Insurers has been developing a document management
>>    system using VisualAge Smalltalk. ANI's digital archive is designed to
>>    support both availability of business critical documents in the event
>>    of lost or damaged paper documents and broader access to all documents
>>    by off site staff. ANI reviewed commercially available document
>>    management systems, but they were prohibitively expensive and required
>>    a significant amount of effort to prepare and index documents upon
>>    submission.
>>
> 
> 
> Yes.  A proper vertical.  A proper vertical is a group of fairly 
> specialized people who have a severe need not served by free or cheap 
> software and money to spend on a solution.  Nuclear Insurers are an 
> excellent example; they need things from a document management system 
> that nobody else needs, and they are willing to pay because they are 
> on the hook for literally billions of dollars if the system fails them.  
> 
> Hmmm.  Other verticals:  Building and construction trades, Intelligent 
> building control systems (reclaiming heat/light/water), large-scale 
> architecture, Medical billing, Investment analysis and news tracking, 
> Legal records and searches?  There are of course huge vertical markets
> for information-warfare and covert-surveillance stuff, but I'd rather 
> not be in that business.
> 
> Hmmm.  High-reliability software might be a good under-served niche 
> applicable to many mission-critical verticals. The "standard industry
> practice" has been to write shrink-wrap licenses that disclaim all 
> responsibility for software malfunction or crashes.  If we can provide 
> a number of small applications that are guaranteed, and financially 
> insure our clients against loss from crashes or unhandled exceptions, 
> we might be able to provide real business value.  Of course we couldn't 
> insure anybody running windows, but that's a small barrier when you 
> have people who seriously need reliable systems.  Lisp is a much better 
> language than C/etc for writing stuff and guaranteeing it against 
> buffer overruns, numeric overflows, etc. With a bit of macrology, you 
> can be absolutely sure that every exception is handled.
> 
> Food for thought. 


It struck me today that there are some instructive things about my 
present circumstances. I'm working at a startup that responded to the 
bursting of the Internet bubble by completely reinventing itself as an 
entirely new business in the course of about 6 months. I mean, it was a 
startup to begin with, and then sort of rebooted itself and started up 
all over again, successfully. (Yes, that means I'm not looking to leave; 
on the other hand, I'm always interested in Lisp projects).

We have been very small and very high-involvement, like most startups; 
we all get a lot of exposure to every aspect of how to make things work. 
There's some useful information to be gleaned, I think, for anyone 
dreaming of creating a Lisp-based startup.

First, some encouragement: there are plenty of opportunities for people 
to start up by being able to quickly build technology that is responsive 
to market needs. Lisp is an advantage here. There are also plenty of 
markets in which the customers are not going to care whether the product 
is written in Java or C or SNOBOL, as long as it works the way they want 
it to. (Convincing investors that Lisp is a good idea is an entirely 
different matter, of course).

The business we've built does in fact run into all sorts of problems 
that could be solved more easily using a suitably-chosen and -used Lisp 
development platform. (We don't use Lisp and never will; the reasons why 
are not interesting ones for this discussion).

I get to see a lot of business opportunities for companies that build 
what are essentially black boxes that you stick on a network to do 
something or other for businesses that are trying to move more of their 
communications and data management onto solutions that use the Internet. 
They have lots of problems that they would like to solve by sticking 
reliable boxes between nodes in their infrastructure. If you can supply 
such boxes, and they can pass the rather grueling tests that customers 
subject them to, then you can get the customers' attention. Even better, 
once you have their attention, if you can respond very quickly to 
customer issues, if you can quickly adapt to changing requirements, then 
you can win competitions for six and seven figure sales contracts (yes, 
I've seen this happen repeatedly, and the responsiveness of the vendor 
has repeatedly proven to be the deciding factor). Lisp wins here, too. 
It's very easy for me to imagine quickly building competitive products 
that are basically linux boxes configured to run SBCL or CMUCL 
effectively, and also more or less configured to discourage anyone from 
peeking inside them.

Second, the caveats: Being able to build something cool is not good 
enough. Neither is fast performance. Both are important, but the human 
commitment and real ability to make actual individuals happy at firms 
that are potential customers are more important than either. Doing what 
you say you will do--promising delivery dates and consistently meeting 
or beating them, for example--wins big. Convincing potential customers 
that you will work ten times as hard as the nearest competitor to please 
the customers wins sales. Version management, customer support, and 
documentation quickly become vitally important.

Also it's not really very important to have a lot of code that works 
when you start. After I went through the exercise of inventorying my 
code, I remembered that when we rebooted our startup we had *no* code 
for the new business; zero, zip, nada. What was important was that we 
were able to organize ourselves to figure out something we could 
probably sell better than anyone else, and make that quickly. Once that 
worked, the next thing we needed to be able to do was listen to what 
customers and potential customers were asking for and change what we 
were doing right away to accommodate them, even if it meant gross 
reorganization of our priorities.

All of these requirements *should* be things that are easier to do with 
Lisp and a group of accomplished Lisp programmers, but that group has to 
want to think that way.
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkptblatdr.fsf@thar.lbl.gov>
A great idea, Ray!

I don't have massive, polished Lisp libraries under my belt (the Young
Lisper that I am), but I've done a fair amount of Scheme'ing in my
research in computational biology.

In any case, count me in. :-) I'm always looking for new opportunities
& fun jobs.

~Tomer


Ray Dillinger <····@sonic.net> writes:

> [... original post quoted in full; snipped ...]

-- 
()
From: mikel
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <%0H3c.8132$Ek.5825@newssvr27.news.prodigy.com>
Ray Dillinger wrote:
> 
> I believe Lisp is a better programming language than those now commonly
> in use. Its use should be an excellent competitive advantage.  I observe 
> that many managers of existing companies don't want to hire people to 
> program in Lisp.
> 
> This contradiction appears to be a business opportunity waiting to happen;
> if we are right that Lisp is a competitive advantage, then we should start 
> a business to use it, take the edge it offers, and outcompete companies 
> that will not use it.
> 
> What application should we work on?  I figure, another way to ask that is, 
> what code do we own that has potential commercial value?  Let's put our 
> heads together and see what we can do.
> 
> My main interest being natural language, most of my code that has potential 
> commercial value deals with language.  Here's what I've got:

[...list of useful code snipped...]

> What have other people got?  What can we stir together in a pot and make
> into a business?

My previous post on this subject seems to have fallen in the well. 
Hence, I repeat:

- A Mac OS X IDE for Lisp

- A WYSIWYG prose-oriented text editor with Emacs-like API

- a few compilers and interpreters for various versions of a lisp-like 
language, some with built-in pattern-matching and Waters-style series; 
various syntaxes, some using Zebu-generated parsers

- large subsets of two multiplayer networked games, both in development

- a bitmap class for use with Corman Lisp on Windows

- various utilities and libraries developed in support of other people's 
projects (McCLIM, SK8, a never-released lisp operating system developed 
at Apple, etc.)
From: Ray Dillinger
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <405006F5.F940AD8B@sonic.net>
mikel wrote:
> 
> Ray Dillinger wrote:
> >
> > I believe Lisp is a better programming language than those now commonly
> > in use. Its use should be an excellent competitive advantage.  I observe
> > that many managers of existing companies don't want to hire people to
> > program in Lisp.
> >
> > This contradiction appears to be a business opportunity waiting to happen;
> > if we are right that Lisp is a competitive advantage, then we should start
> > a business to use it, take the edge it offers, and outcompete companies
> > that will not use it.
> >
> > What application should we work on?  I figure, another way to ask that is,
> > what code do we own that has potential commercial value?  Let's put our
> > heads together and see what we can do.
> >
> > My main interest being natural language, most of my code that has potential
> > commercial value deals with language.  Here's what I've got:
> 
> [...list of useful code snipped...]
> 
> > What have other people got?  What can we stir together in a pot and make
> > into a business?
> 
> - A Mac OS X IDE for Lisp
> 
> - A WYSIWYG prose-oriented text editor with Emacs-like API
> 
> - a few compilers and interpreters for various versions of a lisp-like
> language, some with built-in pattern-matching and Waters-style series;

> - large subsets of two multiplayer networked games, both in development
> 
> - a bitmap class for use with Corman Lisp on Windows
> 
> - various utilities and libraries developed in support of other people's
> projects (McCLIM, SK8, a never-released lisp operating system developed
> at Apple, etc.)

Okay...  useful stuff for developers, it sounds like.  That's good, but 
only if we can use it to develop a product.  Development tools themselves 
are not a product you can make money in anymore, I don't think; GNU 
floods the market from one end with good free stuff, partly because code 
artists have produced it "because it wasn't there" and partly in order 
to fight the monopolist. Microsoft floods the market from the other end 
with "loss leader" dev tools intended to lock people into their operating 
systems.  Neither of them cares about making any money off their dev 
tools, and we'd have to step into the middle of the fight and compete 
with them both while having to care.  That's a losing game.  

Speaking of games, hmmm.  The massively-multiplayer online game industry
is a possibility - but there are a whole lot of heavy hitters there already. 
Still, the heavy hitters are all there to make a profit, which means it's 
an infinitely better market than development tools.  Can such a game be 
launched any more with less than a million dollars to prime the hype 
engine?

				Bear
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkptbj84ai.fsf@thar.lbl.gov>
Well, here's a "vertical/horizontal" that I'm surprised hasn't been
exploited:

Scheme/Lisp has fantastic (meta)linguistic capabilities. One thing
that I could imagine would be taking VHLLs which are common in
industry, and building compilers for them by first 'compiling' them to
Lisp/Scheme. 
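
A toy illustration of the idea (a sketch only -- a real front end would
need an actual parser for the source syntax): translate a tiny expression
language, already read into s-expressions, into Lisp, and let the native
compiler do all the back-end work for free.

    (defun xlate (form)
      "Translate one toy-language expression into Common Lisp."
      (if (atom form)
          form
          (destructuring-bind (op . args) form
            (case op
              (concat `(concatenate 'string ,@(mapcar #'xlate args)))
              (if3    `(if ,(xlate (first args))
                           ,(xlate (second args))
                           ,(xlate (third args))))
              (t      `(,op ,@(mapcar #'xlate args)))))))

    (defun compile-toy (params body)
      "Compile a toy-language function to native code via the host compiler."
      (compile nil `(lambda ,params ,(xlate body))))

    ;; (funcall (compile-toy '(x y) '(concat x "-" y)) "gene" "42")
    ;;   => "gene-42"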

When I see how addicted bioinformaticists are to Perl, for example, I
see a gold mine of opportunity. If you could offer them a compiler for
a VHLL language that they use, to speed up the Gigabytes of genetic
data that they spend *weeks* chugging through with Perl, I'm confident
that they'd use that compiler. 

Just my $0.02,

~Tomer

P.S.- I think that a package for Scsh, which gives the user the
ability to not use parens (like the "Sugar" package), would be a great
replacement for Perl, IMNSHO ( I know, I know, in my dreams... ).

http://redhog.org/Projects/Programming/Current/Sugar/


Ray Dillinger <····@sonic.net> writes:

> Okay...  useful stuff for developers, it sounds like.  That's good, but 
> only if we can use it to develop a product.  Development tools themselves 
> are not a product you can make money in anymore, I don't think; GNU 
> floods the market from one end with good free stuff, partly because code 
> artists have produced it "because it wasn't there" and partly in order 
> to fight the monopolist. Microsoft floods the market from the other end 
> with "loss leader" dev tools intended to lock people into their operating 
> systems.  Neither of them cares about making any money off their dev 
> tools, and we'd have to step into the middle of the fight and compete 
> with them both while having to care.  That's a losing game.  
>
> Speaking of games, Hmmm.  The massively-multiplayer online game industry
> is possible - but there are a whole lot of heavy hitters there already. 
> Still, the heavy hitters are all there to make a profit, which means it's 
> an infinitely better market than development tools.  Can such a game be 
> launched any more with less than a million dollars to prime the hype 
> engine?
>
> 				Bear

-- 
()
From: Alan Shutko
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <87smgfcao1.fsf@wesley.springies.com>
·······@noshpam.lbl.government writes:

> If you could offer them a compiler for a VHLL language that they
> use, to speed up the Gigabytes of genetic data that they spend
> *weeks* chugging through with Perl, I'm confident that they'd use
> that compiler.

A huge part of the problem is that few bioinformaticists are actually
good programmers.  A compiler won't help speed up an algorithm that
uses 1GB to load a small 23MB sequence.  They will patch five things
from CPAN haphazardly together and wonder why it's so slow and takes
too much memory.  Why?  Because each piece they took was designed to
do something else....  Even the big names aren't immune to this
problem.

-- 
Alan Shutko <···@acm.org> - I am the rocks.
From: Ray Dillinger
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <40514F59.5EDB09EE@sonic.net>
Alan Shutko wrote:
> 
> ·······@noshpam.lbl.government writes:
> 
> > If you could offer them a compiler for a VHLL language that they
> > use, to speed up the Gigabytes of genetic data that they spend
> > *weeks* chugging through with Perl, I'm confident that they'd use
> > that compiler.
> 
> A huge part of the problem is that few bioinformaticists are actually
> good programmers.  A compiler won't help speed up an algorithm that
> uses 1GB to load a small 23MB sequence.  They will patch five things
> from CPAN haphazardly together and wonder why it's so slow and takes
> too much memory.  Why?  Because each piece they took was designed to
> do something else....  Even the big names aren't immune to this
> problem.
> 

You describe a common pattern of naive programming, which is called 
exponential abstraction.

 I have observed that pattern in software development in many languages. 
It appears to be one of the downsides of excessive abstraction; people 
develop all these abstract parts, which each encompass massive complexity,
and then want to put them together to form "higher level abstractions" 
ad infinitum.  The problem arises because the "abstract parts" each 
contain their own unnecessary copies of infrastructure or functionality 
that is very similar.  Before too long, you wind up with some monstrosity 
that contains seventy-nine differently-named functions in various modules 
which do the same thing, and twelve different implementations of some 
moderately complicated function that use twelve different infrastructures, 
etc.  Exponential abstraction can become particularly egregious in shops 
which blindly worship Object-Oriented programming style and code reuse 
without regard to appropriateness or efficiency. 

Good programming consists in riding the line between abstraction and 
integration. Get too integrated, and you slide into spaghetti code and 
cryptychs and your code becomes opaque, brittle and hard to change. 
Get too abstracted, and you'll have a program that wallows in lameness,
with much duplication, many functions so far removed from the business 
at hand that you frankly have no idea why they're there, and quantities 
of source code so vast that just figuring out which bit of code does 
anything can be a major challenge. 

Anyway; -- if most Bioinformatics code is currently suffering from 
exponential abstraction, then there's an opportunity to provide much 
better code than they're using now.  But I'm not familiar enough with 
the field to know what the code would need to do or what the scope of 
the programming projects would be.  How may I become more enlightened 
as to the problems that need solved there?

				Bear
From: Alan Shutko
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <878yi62dqf.fsf@wesley.springies.com>
Ray Dillinger <····@sonic.net> writes:

> But I'm not familiar enough with the field to know what the code
> would need to do or what the scope of the programming projects would
> be.  How may I become more enlightened as to the problems that need
> solved there?

Well, I married a biochemist.  But there are often local groups doing
related work.  So I'd look for local meetings of
bioinformaticists or biochemists.  Check to see if there are any
local mailing lists for that kind of thing.  I know there are some
general ones around.  Maybe lurk around the bioperl lists.  Once you
start hearing enough people, you'll be able to pick out what kinds of
problems people are working on.

To really work on them, though, you'll need to partner with a
biologist of some sort.  I think the best results come when there's
someone really good at biology combined with someone really good at
CS.  Fortunately, there aren't that many pairings like that right
now, so you have an opportunity.


-- 
Alan Shutko <···@acm.org> - I am the rocks.
You stroke me, I stroke back!
From: Lupo LeBoucher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <jtSdnT0L99wE9s_d4p2dnA@io.com>
In article <·················@sonic.net>,
Ray Dillinger  <····@sonic.net> wrote:
>Alan Shutko wrote:
>
>Anyway; -- if most Bioinformatics code is currently suffering from 
>exponential abstraction, then there's an opportunity to provide much 
>better code than they're using now.  But I'm not familiar enough with 
>the field to know what the code would need to do or what the scope of 
>the programming projects would be.  How may I become more enlightened 
>as to the problems that need solved there?
>
>				Bear

Actually, given Lisp and some Norvig books (and some "glue" code), one 
could do truly astounding things in Bioinformatics. 

I have a couple of scientist friends at a to-remain-nameless local
bioinformatics company. They end up using Matlab (*eeeeeeeeeeewww*), talking 
to SQL, doing pattern matching and Bayesian network type stuff on big 
data sets. If they have a need to go fast, if you can imagine this, they 
send it to their in-house code monkeys who reimplement the work in VISUAL 
FOOKIN BASIC. This is one of the top biosciences companies in the known 
universe, mind you. Presumably, they choose their code-monkeys based on 
ready availability rather than actual competence. You could probably 
replace their whole department with two guys and a Lisp. You could also 
probably build some really neat code for them which they would pay for, if 
you sat down and talked to some of them.

Another ripe business opportunity, assuming you lack moral qualms about 
this sort of thing, is government contracting. There are all kinds of 
crypto AI thingees that are trivial to suck out of a book on Lisp. The one 
that leaps to mind immediately is the development of software which keeps 
track of error in finite element analysis codes. I now know how to do this 
using Norvig's mini-macsyma. It would be even tastier if you could hand 
them some kind of FEMLISP derivative which has all the bells and whistles 
they want. There are all manner of other such things, presumably aimed at 
SRI alum type small businesses. Government contracting is also probably a 
great way to get vertical market stuff done (Ross Perot made some money 
this way, as I recall). It is unfortunate that an awful lot of what 
doesn't work in government computing requires "glue" code, which Lisp is 
particularly bad at (compared to, say, Perl or Python).

There's about a zillion hard science things I can think of which are easy 
in Lisp (and would be the utter tits if Lisp came with a halfway decent 
graphing library, instead of piping results through Gnuplot/OpenDX/Matlab).
You can probably even make money doing some of these things. I plan on 
doing so, as soon as some time clears up for it.

Beyond that, as I recall from an examination of the CLISP source tree, the 
authors of CLISP are interested in financial analysis and derivatives type 
trading. That's the biggest pipe of money in the world, and they use 
fairly naive methods. 

At some point I will come eat food with the other Bay Area lispers, if you 
promise not to make me foot the bill for my dislike of emacs and 
defsystem.

-Lupo                                               <··@pentagon.io.com>
"Java: the elegant simplicity of C++ and the blazing speed of Smalltalk"
From: Carl Shapiro
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <ouy3c8dv07p.fsf@panix3.panix.com>
··@io.com (Lupo LeBoucher) writes:
                          
> It is unfortunate that an awful lot of what 
> doesn't work in government computing requires "glue" code, which Lisp is 
> particularly bad at (compared to, say, Perl or Python).

I would like to challenge you on this point.  I have occasionally
found myself inspecting library code from scripting languages to
translate back into Lisp.  As many people know, many scripting
language "modules" are assembled from two distinct halves: a foreign
library, usually of C code, and a veneer of native code, which
provides the external interface to the foreign library.

There is nothing particularly magical about this combination, and
people have been writing library interfaces for nearly two decades
this way with Lisp systems that support a foreign function interface.
Furthermore, it is just as easy (if not easier) to write foreign
function definitions in Lisp as it is, say, to decorate C code for use
with Perl.  Lisp's declarative style and excellent support for
macrology truly makes this a snap.  Perhaps the Perls and Pythons of
the world have better "interface generator" type tools, but as far as
I can tell, none of them predate the pervasive assumption that it is
easy to integrate Perl with C.
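
A minimal example of the sort of foreign function definition meant above
(using SBCL's SB-ALIEN here; CMUCL's ALIEN interface is nearly identical,
and the commercial Lisps have their own equivalents):

    (sb-alien:define-alien-routine "strlen" sb-alien:int
      (s sb-alien:c-string))

    (strlen "glue code")                   ; => 9

    ;; and the macrology part: stamp out a family of such bindings at once
    (defmacro def-c-string-fns (&rest c-names)
      `(progn
         ,@(loop for name in c-names
                 collect `(sb-alien:define-alien-routine ,name sb-alien:int
                            (s sb-alien:c-string)))))

    ;; (def-c-string-fns "strlen" "atoi")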
From: felix
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4052dc61$0$144$9b622d9e@news.freenet.de>
Carl Shapiro wrote:
> ··@io.com (Lupo LeBoucher) writes:
> 
>>It is unfortunate that an awful lot of what 
>>doesn't work in government computing requires "glue" code, which Lisp is 
>>particularly bad at (compared to, say, Perl or Python).
> 
> 
> I would like to challenge you on this point.  I have occasionally
> found myself inspecting library code from scripting languages to
> translate back into Lisp.  As many people know, many scripting
> language "modules" are assembled from two distinct halves: a foreign
> library, usually of C code, and a veneer of native code, which
> provides the external interface to the foreign library.
> 
> There is nothing particularly magical about this combination, and
> people have been writing library interfaces for nearly two decades
> this way with Lisp systems that support a foreign function interface.
> Furthermore, it is just as easy (if not easier) to write foreign
> function definitions in Lisp as it is, say, to decorate C code for use
> with Perl.  Lisp's declarative style and excellent support for
> macrology truly makes this a snap.  Perhaps the Perls and Pythons of
> the world have better "interface generator" type tools, but as far as
> I can tell, none of them predate the pervasive assumption that it is
> easy to integrate Perl with C.

Indeed. Some Lisp/Scheme implementations give you extremely powerful
language features for integrating foreign code - especially the ones
that compile to C, since there you can effectively use macros to generate
the (C) glue code itself. That's better than interface generators (which
generate a lot of garbage code anyway, IME).
And if you need interface-generators, then there is still SWIG, which
handles several Scheme implementations.


cheers,
felix
From: Lupo LeBoucher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <XLydndX_huR23cvd4p2dnA@io.com>
In article <···············@panix3.panix.com>,
Carl Shapiro  <·············@panix.com> wrote:
>··@io.com (Lupo LeBoucher) writes:
>
>> It is unfortunate that an awful lot of what 
>> doesn't work in government computing requires "glue" code, which Lisp is 
>> particularly bad at (compared to, say, Perl or Python).
>
>I would like to challenge you on this point.  I have occasionally
>found myself inspecting library code from scripting languages to
>translate back into Lisp.  

Well, that's sort of the point:

In Lisp-land you find yourself inspecting library code from scripting 
languages to translate back into Lisp. In Python-land, you don't have to 
do this.

-Lupo
"They are instructed in excellence in three things from age 5 to 20; to
ride, to draw the bow and to speak the truth." -Herodotus   <··@io.com> 
From: Carl Shapiro
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <ouyoeqxl7vf.fsf@panix3.panix.com>
··@io.com (Lupo LeBoucher) writes:

> Well, that's sort of the point:
> 
> In Lisp-land you find yourself inspecting library code from scripting 
> languages to translate back into Lisp. In Python-land, you don't have to 
> do this.

Having to inspect library code in language A to import into language B
does not make language A a better language for writing library code.
It merely means that language A happens to have one more library of
useful code for a given problem domain.  Or, to borrow your words, it
does not follow that language B becomes "particularly bad at" writing
"glue" code.
From: Lupo LeBoucher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <goudnUirKbo_78bdRVn-tA@io.com>
In article <···············@panix3.panix.com>,
Carl Shapiro  <·············@panix.com> wrote:
>··@io.com (Lupo LeBoucher) writes:
>
>> Well, that's sort of the point:
>> 
>> In Lisp-land you find yourself inspecting library code from scripting 
>> languages to translate back into Lisp. In Python-land, you don't have to 
>> do this.
>
>Having to inspect library code in language A to import into language B
>does not make language A a better language for writing library code.
>It merely means that language A happens to have one more library of
>useful code for a given problem domain.  Or, to borrow your words, it
>does not follow that language B becomes "particularly bad at" writing
>"glue" code.

Um, yeah; in an ideal world, someone would take CPAN and shoe-horn it into 
CMU-CL (or even just shoe-horn the basic functionality of Perl itself 
into Lisp, like Graham is trying to do with Arc), and that could make a 
Lisp which is far more handy for "glue" code than Perl is, but nobody has 
done that, so we're stuck with shitty scripting languages for day to day 
work, or reimplementing language A in language B.

Not many people get paid to reimplement language A in language B once we 
get out of grad school. And it shows.

-Lupo
"it is at once, a theater of the absurd, a decomposing corpse, and an 
insane asylum."-former UN ambassador Pat Moynihan on the UN     <··@io.com>
From: Rob Warnock
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <Lf2dnfIiKKme5cHd3czS-g@speakeasy.net>
Lupo LeBoucher <··@io.com> wrote:
+---------------
| Um, yeah; in an ideal world, someone would take CPAN and shoe-horn it into 
| CMU-CL (or even just shoe-horn the basic functionality of Perl itself 
| into Lisp, like Graham is trying to do with Arc), and that could make a 
| Lisp which is far more handy for "glue" code than Perl is, but nobody has 
| done that, so we're stuck with shitty scripting languages for day to day 
| work, or reimplementing language A in language B.
+---------------

Well, just because CL isn't *perfect* for scripting doesn't mean it's
not awfully darned useful!! I use CMUCL for scripting lots of stuff.
Some recent examples:

"date-nist" -- fetches the current time from (one or more of)
    the NIST time server(s). The heart of it is this function:

	(defun fetch-time/rfc868 (host)
	  (let* ((fd (connect-to-inet-socket host 37))
		 (stream (system:make-fd-stream
			  fd
			  :element-type '(unsigned-byte 8)
			  :buffering :none)))
	    (with-open-stream (s stream)
	      (loop repeat 4			; [Thanks, Erik!]
		    for time = (read-byte s)
			     then (+ (read-byte s) (* time 256))
		finally (return time)))))

"csv_to_html" -- Convert a CSV (Comma-Separated Variables) file into
    an HTML table. The heart of it is this function:

	;;; PARSE-CSV-LINE -- Parse one CSV line into a list of fields,
	;;; stripping quotes and field-internal escape characters.
	;;; Lexical states: NORMAL QUOTED ESCAPED QUOTED+ESCAPED
	;;;
	(defun parse-csv-line (line)
	  (when (or (string= line "")           ; special-case blank lines
		    (char= #\# (char line 0)))  ; or those starting with "#"
	    (return-from parse-csv-line '()))
	  (loop for c across line
		with state = 'normal
		and results = '()
		and chars = '() do
	    (ecase state
	      ((normal)
	       (case c
		 ((#\") (setq state 'quoted))
		 ((#\\) (setq state 'escaped))
		 ((#\,)
		  (push (coerce (nreverse chars) 'string) results)
		  (setq chars '()))
		 (t (push c chars))))
	      ((quoted)
	       (case c
		 ((#\") (setq state 'normal))
		 ((#\\) (setq state 'quoted+escaped))
		 (t (push c chars))))
	      ((escaped) (push c chars) (setq state 'normal))
	      ((quoted+escaped) (push c chars) (setq state 'quoted)))
	    finally
	     (progn
	       ;; close still-open field
	       (push (coerce (nreverse chars) 'string) results)
	       (return (nreverse results)))))

"keto" -- Given a number of grams of protein, carbohydrate, and fat,
    show the total number of calories and the "ketogenic ratio" [useful
    to those on low-carb diets]:

	% cat ~/bin/keto
	#!/usr/local/bin/cmucl -script
	(cond
	  ((= 3 (length *script-args*))
	   (destructuring-bind (p c f)
	       (mapcar #'read-from-string *script-args*)
	     (format t "grams: protein ~a  carb ~a  fat ~a~%" p c f)
	     (format t "ketogenic ratio: ~a~%"
		    (/ (+ (* 0.9 f) (* 0.46 p))
		       (+ (* 1.0 c) (* 0.1 f) (* 0.58 p))))
	     (format t "total calories: ~a~%"
		       (+ (* 4 p) (* 4 c) (* 9 f)))))
	  (t
	    (format t "usage: ~a <protein> <carb> <fat>~%" *script-name*)
	    (format t "(enter all amounts in grams)~%")
	    (quit 1)))
	% keto.cl 8 6 17
	grams: protein 8  carb 6  fat 17
	ketogenic ratio: 1.5380875
	total calories: 209
	% 

"sum-time-user-sys" -- Do some trivial processing on the output of the
    "time" builtin command in "csh". [Shown in a previous post today.]

"random" -- Generate random strings which are legal as URLs, filenames,
    and passwords on most systems, thus shell and URL metacharacters
    must be excluded. Takes the time of day plus a number of bytes from
    "/dev/urandom" and uses it to seed the CMUCL MT19937 random number
    generator, then crank that some number of times and encode the output
    into acceptable text [6 bits per character] of the desired length
    [default 16]:

	% random
	WhiTrVcHUQw8XavI
	% random 64
	················@··········@esAY0z2FbDugx3yRKRp_aetqNzbh6EtoUG2w
	% repeat 3 random
	z0P0xuFm6rKBosee
	W0d_BWBiVZIkQvg2
	LJUHkFRrhfictQRv
	% repeat 5 random 8
	kPIbakFj
	6T0KvOB4
	2GINrD3S
	D39TQIV3
	1vjJ8KzQ
	% 

"wild" -- Construct more complex wild-card command than are natively
    convenient in Unix shell, e.g., the sort of thing TOPS-10 used to
    let you do, stuff like "ren *.foo *.bar" [which does *not* do the
    same thing at all in Unix!!]. It's just a loop around CL's DIRECTORY
    and TRANSLATE-PATHNAME and a FORMAT to print it all. It doesn't do
    the commands itself, it just outputs them, but if you like what you
    see, you can then repeat it and pipe the output to "/bin/sh -x" to
    execute them (the "-x" lets you see what's going on). Example:

	% ls -l foo1*
	-rwxr-xr-x  1 rpw3  rpw3  2099 Feb  6 03:52 foo1
	-rw-r--r--  1 rpw3  rpw3   223 Feb  6 03:48 foo1.lisp
	-rw-r--r--  1 rpw3  rpw3   882 Feb  6 03:52 foo1.x86f
	% wild
	usage: wild command pattern [ repl-patterns... ]
	% wild mv foo1\* bar2\*
	mv foo1 bar2
	mv foo1.lisp bar2.lisp
	mv foo1.x86f bar2.x86f
	% !! | sh -x
	wild mv foo1\* bar2\* | sh -x
	+ mv foo1 bar2
	+ mv foo1.lisp bar2.lisp
	+ mv foo1.x86f bar2.x86f
	% ls -l bar2*
	-rwxr-xr-x  1 rpw3  rpw3  2099 Feb  6 03:52 bar2
	-rw-r--r--  1 rpw3  rpw3   223 Feb  6 03:48 bar2.lisp
	-rw-r--r--  1 rpw3  rpw3   882 Feb  6 03:52 bar2.x86f
	% 

Lest I bore people [if I haven't already!], I'll stop there. But suffice
it to say that the more I use CL and become familiar with various nuances,
the more I use it *instead* of scripts in "sh" or Perl (as well as in
conjunction with them).


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Scott Schwartz
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <8gn05w7h41.fsf@galapagos.bx.psu.edu>
····@rpw3.org (Rob Warnock) writes:
> 	% cat ~/bin/keto
> 	#!/usr/local/bin/cmucl -script

You forgot to include the code to handle exceptions, disable the
debugger, defang read-eval, keep garbage from going to stdout, keep
stdin away from random repls, convince it that end of file means to
stop reading and not to go for /dev/tty instead, etc.
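
A minimal sketch of the sort of guards being asked for -- standard CL
special variables plus the QUIT already used in the keto script above;
MAIN is a hypothetical entry point, and real scripts may need more:

	(setf *read-eval* nil)              ; defang #. during READ
	(setf *debugger-hook*               ; print-and-exit instead of a debugger
	      (lambda (condition hook)
	        (declare (ignore hook))
	        (format *error-output* "~&error: ~a~%" condition)
	        (quit 1)))
	(handler-case (main)                ; MAIN is hypothetical
	  (end-of-file () (quit)))          ; EOF on stdin means stop, not /dev/tty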
From: Thomas F. Burdick
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <xcvfzcd5emw.fsf@famine.OCF.Berkeley.EDU>
··@io.com (Lupo LeBoucher) writes:

> In article <·················@sonic.net>,
> Ray Dillinger  <····@sonic.net> wrote:
> >Alan Shutko wrote:
> >
> >Anyway; -- if most Bioinformatics code is currently suffering from 
> >exponential abstraction, then there's an opportunity to provide much 
> >better code than they're using now.  But I'm not familiar enough with 
> >the field to know what the code would need to do or what the scope of 
> >the programming projects would be.  How may I become more enlightened 
> >as to the problems that need solved there?
> 
> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> could do truely astounding things in Bioinformatics. 

Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
biological knowledge to that.  The days of being able to get anything
from a naive/ignorant application of CS to biology are over.  We need
bioinformaticians who are *both* biologists and computer scientists.
Either one alone won't cut it, at least if you're looking for serious
progress.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Chris Hall
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <873c8cfy7s.fsf@naia.homelinux.net>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> ··@io.com (Lupo LeBoucher) writes:
> 
> > In article <·················@sonic.net>,
> > Ray Dillinger  <····@sonic.net> wrote:
> > >Alan Shutko wrote:
> > >
> > >Anyway; -- if most Bioinformatics code is currently suffering from 
> > >exponential abstraction, then there's an opportunity to provide much 
> > >better code than they're using now.  But I'm not familiar enough with 
> > >the field to know what the code would need to do or what the scope of 
> > >the programming projects would be.  How may I become more enlightened 
> > >as to the problems that need solved there?
> > 
> > Actually, given Lisp and some Norvig books (and some "glue" code), one 
> > could do truely astounding things in Bioinformatics. 
> 
> Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
> biological knowledge to that.  The days of being able to get anything
> from a naive/ignorant application of CS to biology are over.  We need
> bioinformaticians who are *both* biologists and computer scientists.
> Either one alone won't cut it, at least if you're looking for serious
> progress.
> 

I think this is good advice in general for any sort of 'fast mover'
specialized software company.

For me, coding has always proved to be one of the *shorter* parts of
the development effort, no matter what language is being used - determining
what needs doing, figuring out how best to fit the 'computerized' part
into the client's/employer's overall pre-existing, probably seriously
kludgy business workflow, integrating with any existing computerized
systems, arranging for backups, etc. all seem to take far more time and
are at least as critical to a successful implementation and a happy
customer.

And let's not forget those meetings to set up meetings!

Mostly, though, it seems to be prying something resembling real
requirements out of the client's gestalt (for lack of a better term),
since they rarely know what we hacker types can truly offer, or if
they do know, they may have difficulty expressing it in terms *we* can
grok, and as we stumble along together and they start 'getting it'
ooops!  the requirements somehow change.  Or we start 'getting it' and
ooops!  we start over coding again from scratch.

My 'point': the one type of XP-style pair programming I never hear
about, but that has worked very well for me a few times: me and a
domain expert in front of a tube getting some work done - one time we
started out planning and ended up re-working the customer-written
requirements doc, another time it was getting a set of particularly
complicated and inter-related business algorithms to play well
together via compile-link-run-print cycles.

I know I enjoyed working that way - having the expert next to me - and
I think the domain experts were rather taken with the results as well:
they seemed to derive real satisfaction from seeing their input
'realized', that they could affect the design like that.  They also
seemed to be quite amazed at how malleable software can be.

The challenges in doing this sort of development are planning the
sessions to keep the experts directly involved so that they aren't
wasting their time, and persuading management to dedicate highly
skilled employees for extended periods of time.  A 50% time commitment
on the part of the domain expert for a week or two, to be used as
needed, was where we generally seemed to settle, which was great since
one always has other code, paperwork, preparing for the next session,
etc., and the domain experts don't miss a whole lot of their normal
work day.  The actual time in front of the tube together was maybe 1/4
of the allocation - the rest was discussion and/or research - but the
process was invaluable.

To develop a real-world bioinformatics app in anything approaching a
rapid or responsive way with any real chance of success would seem to
require at least some of this sort of 'pair programming'.  Best of
luck!

I just *love* putting computers to work for people. Sigh. :-D

+ CJHall

-- 
Democracy: The worship of jackals by jackasses.
-- H.L. Mencken
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkad2haze0.fsf@thar.lbl.gov>
Chris Hall <·······@verizon.net> writes:

> To develop a real-world bioinformatics app in anything approaching a
> rapid or responsive way with any real chance of success would seem to
> require at least some of this sort of 'pair programming'.  Best of
> luck!

I agree with you 100% on this. And this is how expert systems
programming should be done. Wait, am I suggesting something? :-)

~Tomer

>
> I just *love* putting computers to work for people. Sigh. :-D
>
> + CJHall
>
> -- 
> Democracy: The worship of jackals by jackasses.
> -- H.L. Mencken

-- 
()
From: Lupo LeBoucher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <XLydndr_huRSosvd4p2dnA@io.com>
In article <···············@famine.OCF.Berkeley.EDU>,
Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
>··@io.com (Lupo LeBoucher) writes:
>
>> In article <·················@sonic.net>,
>> Ray Dillinger  <····@sonic.net> wrote:
>> >Alan Shutko wrote:
>> >
>> >Anyway; -- if most Bioinformatics code is currently suffering from 
>> >exponential abstraction, then there's an opportunity to provide much 
>> >better code than they're using now.  But I'm not familiar enough with 
>> >the field to know what the code would need to do or what the scope of 
>> >the programming projects would be.  How may I become more enlightened 
>> >as to the problems that need solved there?
>> 
>> Actually, given Lisp and some Norvig books (and some "glue" code), one 
>> could do truely astounding things in Bioinformatics. 
>
>Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
>biological knowledge to that.  The days of being able to get anything
>from a naive/ignorant application of CS to biology are over.  We need
>bioinformaticians who are *both* biologists and computer scientists.
>Either one alone won't cut it, at least if you're looking for serious
>progress.

The guys I know who do this sort of thing are all physicists, and they 
seem to get a lot of research done without knowing much CS or biology. 
They seem to bring a rather different and useful skillset to the plate 
than biologists or CS types do. Perhaps because they're used to applying 
mathematical ideas to, like, reality.

As for actually making money doing this sort of thing (which is the 
subject of this discussion), there is certainly plenty of room for CS 
people to do so without knowing a thing about biology, as I have already 
pointed out.

-Lupo
'Sex is the mysticism of materialism.'-Malcolm Muggeridge     <··@io.com>
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pk65d5azbj.fsf@thar.lbl.gov>
··@io.com (Lupo LeBoucher) writes:

> As for actually making money doing this sort of thing (which is the 
> subject of this discussion), there is certainly plenty of room for CS 
> people to do so without knowing a thing about biology, as I have already 
> pointed out.

I agree with you in principle, but I'm still missing the
details. Could you elaborate?

~Tomer

>
> -Lupo
> 'Sex is the mysticism of materialism.'-Malcolm Muggeridge     <··@io.com>

-- 
()
From: Thomas F. Burdick
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <xcv65d35nxf.fsf@famine.OCF.Berkeley.EDU>
··@io.com (Lupo LeBoucher) writes:

> In article <···············@famine.OCF.Berkeley.EDU>,
> Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
> >··@io.com (Lupo LeBoucher) writes:
> >
> >> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> >> could do truely astounding things in Bioinformatics. 
> >
> >Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
> >biological knowledge to that.  The days of being able to get anything
> >from a naive/ignorant application of CS to biology are over.  We need
> >bioinformaticians who are *both* biologists and computer scientists.
> >Either one alone won't cut it, at least if you're looking for serious
> >progress.
> 
> The guys I know who do this sort of thing are all physicists, and they 
> seem to get a lot of research done without knowing much CS or biology. 
> They seem to bring a rather different and useful skillset to the plate 
> than biologists or CS types do. Perhaps because they're used to applying 
> mathematical ideas to, like, reality.

Wow, so are biologists.  And this is irrelevant, weren't you just
talking about computer scientists with no science training a second ago?

> As for actually making money doing this sort of thing (which is the 
> subject of this discussion), there is certainly plenty of room for CS 
> people to do so without knowing a thing about biology, as I have already 
> pointed out.

Wow, I see your expert debating techniques are back at work.  This
pretty much amounts to a long-winded, "uh-huh, can too."  You didn't
point anything out, you asserted it.  "Pointing something out" implies
that you have something to back you up.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Lupo LeBoucher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <AqidnTVd65BA8sbdRVn-sw@io.com>
In article <···············@famine.OCF.Berkeley.EDU>,
Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
>··@io.com (Lupo LeBoucher) writes:
>
>> In article <···············@famine.OCF.Berkeley.EDU>,
>> Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
>> >··@io.com (Lupo LeBoucher) writes:
>> >
>> >> Actually, given Lisp and some Norvig books (and some "glue" code), one 
>> >> could do truely astounding things in Bioinformatics. 
>> >
>> >Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
>> >biological knowledge to that.  The days of being able to get anything
>> >from a naive/ignorant application of CS to biology are over.  We need
>> >bioinformaticians who are *both* biologists and computer scientists.
>> >Either one alone won't cut it, at least if you're looking for serious
>> >progress.
>> 
>> The guys I know who do this sort of thing are all physicists, and they 
>> seem to get a lot of research done without knowing much CS or biology. 
>> They seem to bring a rather different and useful skillset to the plate 
>> than biologists or CS types do. Perhaps because they're used to applying 
>> mathematical ideas to, like, reality.
>
>Wow, so are biologists.

Typically, not really. Which is why there are all manner of fellowships 
devoted to converting hard scientists into biologists who know math. It's 
also why so many Nobel prizes in biology go to people with physics and 
chemistry training. This seems to be changing somewhat, but relatively slowly.

>  And this is irrelevant, weren't you just
>talking about computer scientists with no science training a second ago?

I dunno man; you brought it up.

>> As for actually making money doing this sort of thing (which is the 
>> subject of this discussion), there is certainly plenty of room for CS 
>> people to do so without knowing a thing about biology, as I have already 
>> pointed out.
>
>Wow, I see your expert debating techniques are back at work.  This
>pretty much amounts to a long-winded, "uh-huh, can too."  You didn't
>point anything out, you asserted it.  "Pointing something out" implies
>that you had anything to back you up.

*snork*
You really got a chip on your shoulder, dontcha?
What's the matter; did brother Lupo piss in your beer? 

I have already pointed out several specific ways in which Lispey CS people 
can make money in Bioinformatics-land, and in a specific large company in 
the Bay Area, even. Viz.: you could vastly outpace the productivity of the
crew of Visual Basic weenies who do their "production code", and you could 
do a lot better than the crap available in Matlab packages at pattern 
matching and Bayesian searches and fancy database queries by sitting down 
with Lisp and some Norvig books. Just organizing the database queries in a 
clever way is probably a marketable product.

I've also pointed out two areas in government contracting where one could 
make some money using Lisp. I've been specific enough that if you know how 
to run a search engine, you can figure out what the damned grant numbers 
are (or you could just ask me).

You have sneered a little, which is entertaining, but it isn't much of a 
contribution to the discussion, now is it?

-Lupo
"The earth has people of two kinds: The ones who think have no religion, 
the others do and have no minds." -Abu al-Ala al Maarri, 11th century poet
                            <··@io.com>
From: Björn Lindberg
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <hcsr7vlb4r9.fsf@fnatte.nada.kth.se>
··@io.com (Lupo LeBoucher) writes:

> In article <···············@famine.OCF.Berkeley.EDU>,
> Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
> >··@io.com (Lupo LeBoucher) writes:
> >
> >> In article <···············@famine.OCF.Berkeley.EDU>,
> >> Thomas F. Burdick <···@famine.OCF.Berkeley.EDU> wrote:
> >> >··@io.com (Lupo LeBoucher) writes:
> >> >
> >> >> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> >> >> could do truely astounding things in Bioinformatics. 
> >> >
> >> >Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
> >> >biological knowledge to that.  The days of being able to get anything
> >> >from a naive/ignorant application of CS to biology are over.  We need
> >> >bioinformaticians who are *both* biologists and computer scientists.
> >> >Either one alone won't cut it, at least if you're looking for serious
> >> >progress.
> >> 
> >> The guys I know who do this sort of thing are all physicists, and they 
> >> seem to get a lot of research done without knowing much CS or biology. 
> >> They seem to bring a rather different and useful skillset to the plate 
> >> than biologists or CS types do. Perhaps because they're used to applying 
> >> mathematical ideas to, like, reality.
> >
> >Wow, so are biologists.
> 
> Typically, not, really. Which is why there are all manner of fellowships 
> devoted to converting hard scientists into biologists who know math. It's 
> also why so many Nobel prizes in biology go to people with physics and 
> chemistry training. This seems to be changing somewhat, but relatively slowly.

Except that there isn't a Nobel Prize in biology.


Björn
From: David Steuber
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <m2vfkwmp5b.fsf@david-steuber.com>
·······@nada.kth.se (Björn Lindberg) writes:

> Except that there isn't a Nobel Prize in biology.

No, they have the Darwin Awards.

-- 
It would not be too unfair to any language to refer to Java as a
stripped down Lisp or Smalltalk with a C syntax.
--- Ken Anderson
    http://openmap.bbn.com/~kanderso/performance/java/index.html
From: Michele Simionato
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <95aa1afa.0403192226.72025758@posting.google.com>
··@io.com (Lupo LeBoucher) wrote in message news:<······················@io.com>...
> The guys I know who do this sort of thing are all physicists, and they 
> seem to get a lot of research done without knowing much CS or biology. 
> They seem to bring a rather different and useful skillset to the plate 
> than biologists or CS types do. Perhaps because they're used to applying 
> mathematical ideas to, like, reality.


Uhmm ... I cannot resist the temptation to quote Joseph Likken:

Physicists used to be smarter and more arrogant than biologists;
now, we are just smarter. -- Joseph Likken, DPF 2000 Ohio State University
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkfzc9azjx.fsf@thar.lbl.gov>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> ··@io.com (Lupo LeBoucher) writes:
>
>> In article <·················@sonic.net>,
>> Ray Dillinger  <····@sonic.net> wrote:
>> >Alan Shutko wrote:
>> >
>> >Anyway; -- if most Bioinformatics code is currently suffering from 
>> >exponential abstraction, then there's an opportunity to provide much 
>> >better code than they're using now.  But I'm not familiar enough with 
>> >the field to know what the code would need to do or what the scope of 
>> >the programming projects would be.  How may I become more enlightened 
>> >as to the problems that need solved there?
>> 
>> Actually, given Lisp and some Norvig books (and some "glue" code), one 
>> could do truely astounding things in Bioinformatics. 
>
> Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
> biological knowledge to that.  The days of being able to get anything
> from a naive/ignorant application of CS to biology are over.  We need
> bioinformaticians who are *both* biologists and computer scientists.
> Either one alone won't cut it, at least if you're looking for serious
> progress.

Well, I wouldn't be so quick on that. I think that there are still
many untapped methods/theories/algorithms/approaches in biology and
computer science which can be matched with problems in either field to
yield progress. But you're right: the obvious ones have already been
explored, and further development along these tracks requires someone
with a solid grounding in both fields.

As I alluded to in another post, the key here is integration:

* integrate the data (bioDBMS' are a huge area of research)
* integrate the tools (allow all those theoretical tools to interoperate)
* agree on representation standards ( XML friendly )

There are already some platforms which are allowing this in a nascent
form:

BioCYC (implemented in LISP!)
www.biocyc.org

BioSPICE:
www.biospice.org

The Systems Biology Workbench:
www.sbw-sbml.org

I'm not done yet, but that's it for now!

~Tomer

>
> -- 
>            /|_     .----------------------.                        
>          ,'  .\  / | No to Imperative war |                        
>      ,--'    _,'   | Wage cons war!       |                        
>     /       /      `----------------------'                        
>    (   -.  |                               
>    |     ) |                               
>   (`-.  '--.)                              
>    `. )----'                               

;-)

-- 
()
From: Thomas F. Burdick
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <xcv3c875nbt.fsf@famine.OCF.Berkeley.EDU>
·······@noshpam.lbl.government writes:

> ···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> 
> > ··@io.com (Lupo LeBoucher) writes:
> >
> >> In article <·················@sonic.net>,
> >> Ray Dillinger  <····@sonic.net> wrote:
> >> >Alan Shutko wrote:
> >> >
> >> >Anyway; -- if most Bioinformatics code is currently suffering from 
> >> >exponential abstraction, then there's an opportunity to provide much 
> >> >better code than they're using now.  But I'm not familiar enough with 
> >> >the field to know what the code would need to do or what the scope of 
> >> >the programming projects would be.  How may I become more enlightened 
> >> >as to the problems that need solved there?
> >> 
> >> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> >> could do truely astounding things in Bioinformatics. 
> >
> > Ha ha ha!  Oh, you weren't kidding?  Uhm, try adding a bunch of
> > biological knowledge to that.  The days of being able to get anything
> > from a naive/ignorant application of CS to biology are over.  We need
> > bioinformaticians who are *both* biologists and computer scientists.
> > Either one alone won't cut it, at least if you're looking for serious
> > progress.
> 
> Well, I wouldn't be so quick on that. I think that there are still
> many untapped methods/theories/algorithms/approaches in biology and
> computer science which can be matched with problems in either field to
> yield progress. But you're right: the obvious ones have already been
> explored, and further development along these tracks require someone
> with a solid grounding in both fields.

I suspect that part of the reason that some CS folks think that they
don't need a good biological grounding is because they don't have one,
so they don't know much of the sordid history of the unthinking
application of analytical techniques to biological problems.  It seems
to be particularly easy to find somewhat convincing mathematical
artifacts in biological systems.  At this point in history, I think
it's safe to say that if you can't make a strong materialist argument
as to why it makes sense to apply some analytical technique to some
biological problem, odds are you're going to do more harm than good.

> As I alluded to in another post, the key here is integration:
> 
> * integrate the data (bioDBMS' are a huge area of research)
> * integrate the tools (allow all those theoretical tools to interoperate)

No doubt.  An unfortunate amount of programmer time goes into gluing
things together that should have fit in the first place.

> * agree on representation standards ( XML friendly )

Yuck, but yeah, XML is probably going to be the necessary backbone here.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkhdwnm5cr.fsf@thar.lbl.gov>
···@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

[snip]

> I suspect that part of the reason that some CS folks think that they
> don't need a good biological grounding is because they don't have one,
> so they don't know much of the sordid history of the unthinking
> application of analytical techniques to biological problems.  It seems

Well, my experience in my lab at least has been with the computer
scientists being intimidated out of their mind. And vice-versa for
biologists having to code. Everyone in our lab feels inadequate,
because they have to work with co-workers who are experts in a
different field than what they studied. So everyone's learning
something different.

> to be particularly easy to find somewhat convincing mathematical
> artifacts in biological systems.  At this point in history, I think
> it's safe to say that if you can't make a strong materialist argument
> as to why it makes sense to apply some analytical technique to some
> biological problem, odds are you're going to do more harm than good.

True, it should only be applied when it can be justified. My
hypothesis is that there are a lot of potential areas for such application.

>
>> As I alluded to in another post, the key here is integration:
>> 
>> * integrate the data (bioDBMS' are a huge area of research)
>> * integrate the tools (allow all those theoretical tools to interoperate)
>
> No doubt.  An unfortunate amount of programmer time goes into gluing
> things together that should have fit in the first place.
>
>> * agree on representation standards ( XML friendly )
>
> Yuck, but yeah, XML is probably going to be the necessary backbone here.

Well, if we can't get the heathens to s-exprs, then XML is better than
nothing... :-)

~Tomer

>
> -- 
>            /|_     .----------------------.                        
>          ,'  .\  / | No to Imperative war |                        
>      ,--'    _,'   | Wage cons war!       |                        
>     /       /      `----------------------'                        
>    (   -.  |                               
>    |     ) |                               
>   (`-.  '--.)                              
>    `. )----'                               

-- 
()
From: Ng Pheng Siong
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <c3av3l$6u9$2@mawar.singnet.com.sg>
According to  <·······@noshpam.lbl.government>:
> Well, my experience in my lab at least has been with the computer
> scientists being intimidated out of their mind. And vice-versa for
> biologists having to code. Everyone in our lab feels inadequate,
> because they have to work with co-workers who are experts in a
> different field than what they studied. So everyone's learning
> something different.

"Being intimidated" == "fearful of being found out"? 

I don't believe an expert in any particular field will be intimidated by an
expert in another field when they have to work together. Each will learn to
trust the judgement of the other.

Not talking about your lab specifically, mind.

Cheers.

-- 
Ng Pheng Siong <····@netmemetic.com> 

http://firewall.rulemaker.net -+- Firewall Change Management & Version Control
http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkfzc6cxhw.fsf@thar.lbl.gov>
····@netmemetic.com (Ng Pheng Siong) writes:

> According to  <·······@noshpam.lbl.government>:
>> Well, my experience in my lab at least has been with the computer
>> scientists being intimidated out of their mind. And vice-versa for
>> biologists having to code. Everyone in our lab feels inadequate,
>> because they have to work with co-workers who are experts in a
>> different field than what they studied. So everyone's learning
>> something different.
>
> "Being intimidated" == "fearful of being found out"? 
>
> I don't believe an expert in any particular field will be intimated by an
> expert in another field when they have to work together. Each will learn to
> trust the judgement of the other.

Well, since our lab throws together physics, biology, chemistry,
computer science, and statistics, there are a lot of basics for a
newcomer to pick up to get the gist of others' research. It's just a
steep learning curve for post-docs & grad students used to being in a
much more homogeneous environment.

> Not talking about your lab specifically, mind.

Dutifully noted, ;-)

~Tomer

>
> Cheers.
>
> -- 
> Ng Pheng Siong <····@netmemetic.com> 
>
> http://firewall.rulemaker.net -+- Firewall Change Management & Version Control
> http://sandbox.rulemaker.net/ngps -+- Open Source Python Crypto & SSL

-- 
()
From: David Steuber
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <m23c86vb3l.fsf@david-steuber.com>
·······@noshpam.lbl.government writes:

> Well, my experience in my lab at least has been with the computer
> scientists being intimidated out of their mind. And vice-versa for
> biologists having to code. Everyone in our lab feels inadequate,
> because they have to work with co-workers who are experts in a
> different field than what they studied. So everyone's learning
> something different.

That actually sounds like a fun place to work.

-- 
Those who do not remember the history of Lisp are doomed to repeat it,
badly.

> (dwim x)
NIL
From: James P. Massar
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <vfkh50l6uvmtau98gajq26vfttauecg9jh@4ax.com>
 
>There are already some platforms which are allowing this in a nascent
>form:
>
>BioCYC (implemented in LISP!)
>www.biocyc.org
>
>BioSPICE:
>www.biospice.org
>
>The Systems Biology Workbench:
>www.sbw-sbml.org
>
 
We're trying to do something along those lines as well:

The BioLingua project

http://nostoc.stanford.edu/Docs/index.html

http://sourceforge.net/projects/biolingua/
From: Paolo Amoroso
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <87vfl8ahlj.fsf@plato.moon.paoloamoroso.it>
[following up to comp.lang.lisp only]

··@io.com (Lupo LeBoucher) writes:

> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> could do truely astounding things in Bioinformatics. 
[...]
> replace their whole department with two guys and a Lisp. You could also 
> probably build some really neat code for them which they would pay for, if 
> you sat down and talked to some of them.

Most probably you are already aware of this:

  BioLisp.org
  http://www.biolisp.org


Paolo
-- 
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkllm1azzq.fsf@thar.lbl.gov>
··@io.com (Lupo LeBoucher) writes:

[snip]

> Actually, given Lisp and some Norvig books (and some "glue" code), one 
> could do truely astounding things in Bioinformatics. 

If you have something in mind, please; give, give!

As for good glue code, Scheme/Lisp can do the job.

>
> I have a couple of scientist friends at a to-remain-nameless local
> bioinformatics company. They end up using Matlab (*eeeeeeeeeeewww*), talking 
> to SQL, doing pattern matching and Bayesian network type stuff on big 
> data sets. If they have a need to go fast, if you can imagine this, they 
> send it to their in-house code monkeys who reimplement the work in VISUAL 
> FOOKIN BASIC. This is one of the top biosciences companies in the known 
> universe, mind you. Presumably, they choose their code-monkeys based on 
> ready availability rather than actual competence. You could probably 
> replace their whole department with two guys and a Lisp. You could also 
> probably build some really neat code for them which they would pay for, if 
> you sat down and talked to some of them.

I see two ideas here, neither of which is fully developed:
* use Lisp to do things in Bioinformatics which are otherwise
hard/impossible.
* speed up existing development tools.

Could you elaborate?

> them some kind of FEMLISP derivative which has all the bells and whistles 

At first, I thought you were referring to some sort of gender-specific
implementation. :-}

> Beyond that, as I recall from an examination of the CLISP source tree, the 
> authors of CLISP are interested in financial analysis and derivatives type 
> trading. That's the biggest pipe of money in the world, and they use 
> fairly naieve methods. 

There's a guy at my lab who started out in theoretical physics in
Russia, moved to NY, started in Burger King while learning English,
worked for years as a Technical Analyst for some high finance firm,
and now is "retired" as a post-doc. :-) So I know that the good
technical analysis group of a financial research department actually
use some very sophisticated methods.

> At some point I will come eat food with the other Bay Area lispers, if you 
> promise not to make me foot the bill for my dislike of emacs and 
> defsystem.

What's the saying? 'Hit the philistines three times on the head with
the Elisp Manual.' ;-)

~Tomer

>
> -Lupo                                               <··@pentagon.io.com>
> "Java: the elegant simplicity of C++ and the blazing speed of Smalltalk"
>

-- 
()
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkr7vtb0j4.fsf@thar.lbl.gov>
Ray Dillinger <····@sonic.net> writes:

> Alan Shutko wrote:
>> 
>> ·······@noshpam.lbl.government writes:
>> 
>> > If you could offer them a compiler for a VHLL language that they
>> > use, to speed up the Gigabytes of genetic data that they spend
>> > *weeks* chugging through with Perl, I'm confident that they'd use
>> > that compiler.
>> 
>> A huge part of the problem is that few bioinformaticists are actually
>> good programmers.  A compiler won't help speed up an algorithm that
>> uses 1GB to load a small 23MB sequence.  They will patch five things
>> from CPAN haphazardly together and wonder why it's so slow and takes
>> too much memory.  Why?  Because each piece they took was designed to
>> do something else....  Even the big names aren't immune to this
>> problem.
>> 
>
> You describe a common pattern of naive programming, which is called 
> exponential abstraction.
>

[snnnniiipppp]

>
> Anyway; -- if most Bioinformatics code is currently suffering from 
> exponential abstraction, then there's an opportunity to provide much 
> better code than they're using now.  But I'm not familiar enough with 
> the field to know what the code would need to do or what the scope of 
> the programming projects would be.  How may I become more enlightened 
> as to the problems that need solved there?

Here's the skinny:

Bioinformaticists work in a heterogeneous software environment, but a
lot of the computational work gets done in:

* Matlab
* Perl
* GNU R (statistical calculation interpreter)

So, if you're using Perl, then the first place you should look is:

* bioperl.org

This is a site which sponsors "bio" + (fill in the blank) + ".org"
development efforts:

* bioinformatics.org

A lot of the work is in data curation and integration, so that
meaningful calculations can be done on the system behavior.

I work with a group that's developing DARPA BioSPICE, and their work
is geared towards that very thing. Yes, it is
modeled after the celebrated SPICE analysis package. The key insight
here for the non-bio minded, is that the cell is nothing more than a
noisy, stochastic, analog, nonlinear system. :-)

www.biospice.org

Whew! I have lots more to say, but that's it for now.

~Tomer

>
> 				Bear

-- 
()
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkvfl5b0xq.fsf@thar.lbl.gov>
Alan Shutko <···@acm.org> writes:

> ·······@noshpam.lbl.government writes:
>
>> If you could offer them a compiler for a VHLL language that they
>> use, to speed up the Gigabytes of genetic data that they spend
>> *weeks* chugging through with Perl, I'm confident that they'd use
>> that compiler.
>
> A huge part of the problem is that few bioinformaticists are actually
> good programmers.  A compiler won't help speed up an algorithm that

This is somewhat true. In the lab that I work in, there are all types:
* CS types learning biology
* bio types learning CS
* wizards who can hold their own in both fields.

The one thing that they share is Perl as a crutch. This is due to
cultural and historical reasons...

> uses 1GB to load a small 23MB sequence.  They will patch five things
> from CPAN haphazardly together and wonder why it's so slow and takes
> too much memory.  Why?  Because each piece they took was designed to
> do something else....  Even the big names aren't immune to this
> problem.

Very true.

~Tomer

>
> -- 
> Alan Shutko <···@acm.org> - I am the rocks.
>

-- 
()
From: David Fisher
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <14030ca9.0403141626.fcb9c02@posting.google.com>
·······@noshpam.lbl.government wrote in message news:<···············@thar.lbl.gov>...
>
> Well, here's a "vertical/horizontal" that I'm surprised hasn't been
> exploited:
> 
> Scheme/Lisp has fantastic (meta)linguistic capabilities. One thing
> that I could imagine would be taking VHLLs which are common in
> industry, and building compilers for them by first 'compiling' them to
> Lisp/Scheme. 
> 
> When I see how addicted bioinformaticists are to Perl, for example, I
> see a gold mine of opportunity. If you could offer them a compiler for
> a VHLL language that they use, 

What bioinformatics are you talking about, specifically? Most
bioinformatics seems to be either some trivial stuff: accessing online
databases, parsing of their internal formats, saving data to disk,
serving data to client apps, for which Perl is a semi-reasonable
solution; or it's high-performance stuff like sequence analysis. Some
do abuse Perl and use it for the performance-demanding tasks, but it's
only because they truly believe Perl is as good as it gets, or they
are just lazy or too busy to learn another language. So yes, if you
can compile *Perl* to something that would run very fast, there is a
gold mine there, and that's probably an understatement. But if you
want to sell a *new* VHLL to bioinformaticians, I'd be a bit sceptical.

> to speed up the Gigabytes of genetic
> data that they spend *weeks* chugging through with Perl, I'm confident
> that they'd use that compiler. 
> 
> Just my $0.02,
> 
> ~Tomer
> 
> P.S.- I think that a package for Scsh, which gives the user the
> ability to not use parens (like the "Sugar" package), would be a great
> replacement for Perl, IMNSHO ( I know, I know, in my dreams... ).

Open source on top of open source, and you want to charge money for
the result, essentially for the packaging, like Redhat now does. Did I
understand your idea correctly?
From: Feuer
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4055424e$1@news101.his.com>
David Fisher wrote:

> Open source on top of open source, and you want to charge money for
> the result, essentially for the packaging, like Redhat now does. Did I
> understand your idea correctly?

Packaging?  What about training?

David
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkoeqx9k71.fsf@thar.lbl.gov>
Feuer <·····@his.com> writes:

> David Fisher wrote:
>
>> Open source on top of open source, and you want to charge money for
>> the result, essentially for the packaging, like Redhat now does. Did I
>> understand your idea correctly?
>
> Packaging?  What about training?

Lisp-Certified Bioinformaticist? :-)

Seriously, though. You need quite a rep (Cisco, Oracle, Microsoft,
Sun) to put out your own certification...

~Tomer



>
> David

-- 
()
From: Feuer
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4057f32d$1@news101.his.com>
·······@noshpam.lbl.government wrote:

> Feuer <·····@his.com> writes:
> 
> 
>>David Fisher wrote:
>>
>>
>>>Open source on top of open source, and you want to charge money for
>>>the result, essentially for the packaging, like Redhat now does. Did I
>>>understand your idea correctly?
>>
>>Packaging?  What about training?
> 
> 
> Lisp-Certified Bioinformaticist? :-)
> 
> Seriously, though. You need quite a rep (Cisco, Oracle, Microsoft,
> Sun) to put out your own certification...

Not talking about certifications.  I'm talking about training people to 
use Lisp and Lisp software to work more efficiently within their 
particular field.

David
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkd67bm593.fsf@thar.lbl.gov>
Feuer <·····@his.com> writes:
> Not talking about certifications.  I'm talking about training people
> to use Lisp and Lisp software to work more efficiently within their
> particular field.

Sorry about the certification/training mixup.

Well, I can imagine this kind of 'service' as being a part of a
business plan. But I'm skeptical about people using Lisp
directly. Under the hood and on the server, sure. They'll never
know. Or it can be used "through the backdoor" as with Emacs and
AutoCAD, where lisp is used to make the tool more useful and
automated. 

What do you think?

~Tomer

>
> David

-- 
()
From: ·······@noshpam.lbl.government
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <4pkznah9kc1.fsf@thar.lbl.gov>
·············@yahoo.com (David Fisher) writes:

> ·······@noshpam.lbl.government wrote in message news:<···············@thar.lbl.gov>...
>>
>> Well, here's a "vertical/horizontal" that I'm surprised hasn't been
>> exploited:
>> 
>> Scheme/Lisp has fantastic (meta)linguistic capabilities. One thing
>> that I could imagine would be taking VHLLs which are common in
>> industry, and building compilers for them by first 'compiling' them to
>> Lisp/Scheme. 
>> 
>> When I see how addicted bioinformaticists are to Perl, for example, I
>> see a gold mine of opportunity. If you could offer them a compiler for
>> a VHLL language that they use, 
>
> What bioinformatics are you talking about, specifically? Most
> bioinformatics seems to be either some trivial stuff: accessing online
> databases, parsing of their internal formats, saving data to disk,
> serving data to client apps, for which Perl is a semi-reasonable

Well, you can say this about *anything*: bioinformatics,
finance, accounting, etc. The key is understanding the domain, and the
paradigms that people there work around.
Dogma of Molecular Biology is:

DNA -> RNA -> protein

All else is details. :-)

Make that easy, and you're moving in the right direction.
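
A toy illustration of the first arrow, just to make "easy" concrete
[using the coding-strand convention, transcription is nothing more than
T -> U; the second arrow would need a codon table on top of this]:

	(defun transcribe (dna)
	  ;; DNA coding strand -> mRNA, e.g. (transcribe "ATGGCC") => "AUGGCC"
	  (substitute #\U #\T (string-upcase dna)))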

> solution; or it's high-performance stuff like sequence analysis. Some
> do abuse Perl and use it for the performance-demanding tasks, but it's
> only because they truly believe Perl is as good as it gets, or they
> are just lazy or too busy to learn another language. So yes, if you

Both of the things you say are true; it's a combination of historical and
cultural forces. I'll elaborate if desired.

> can compile *Perl* to something that would run very fast, there is a
> gold mine there, and that's probably an understatement. But if you
> want to sell *new*  VHLL to bioinformaticians, I'd be a bit sceptical.

I agree. That's my general feeling.

>
>> to speed up the Gigabytes of genetic
>> data that they spend *weeks* chugging through with Perl, I'm confident
>> that they'd use that compiler. 
>> 
>> Just my $0.02,
>> 
>> ~Tomer
>> 
>> P.S.- I think that a package for Scsh, which gives the user the
>> ability to not use parens (like the "Sugar" package), would be a great
>> replacement for Perl, IMNSHO ( I know, I know, in my dreams... ).
>
> Open source on top of open source, and you want to charge money for
> the result, essentially for the packaging, like Redhat now does. Did I
> understand your idea correctly?

Well, I never claimed that this was my money-making scheme; I only
think that this compiler project would work if it were
proprietary ( and I'm surprised that no one has done it till now
). But I prefer to work in open-source...

~Tomer

-- 
()
From: Geoffrey S. Knauth
Subject: Re: Business Opportunities for SF Bay Lispers?
Date: 
Message-ID: <1gb601o.311jwipo8gblN%geoff@knauth.org>
<·······@noshpam.lbl.government> wrote:

> When I see how addicted bioinformaticists are to Perl, for example, I
> see a gold mine of opportunity. If you could offer them a compiler for
> a VHLL language that they use, to speed up the Gigabytes of genetic
> data that they spend *weeks* chugging through with Perl, I'm confident
> that they'd use that compiler. 

There was a summary of Scheme and/or Lisp use in Bioinformatics at the
International Lisp Conference 2003.  I need to locate my notes to find
the speaker's name.

-- 
Geoffrey S. Knauth | http://knauth.org/gsk