Tuesday, October 20, 2015
I've just released version 1.2 of the Evolving Connectionist System Toolbox. The toolbox is a collection of command-line applications that implement several algorithms associated with Evolving Connectionist Systems (ECoS).
This release of the toolbox adds several tools for Nominal-scale Evolving Connectionist Systems (NECoS). NECoS is a modification of the ECoS algorithm that can model nominal-scale data directly. While most ANN require nominal-scale data to be transformed into a binary or orthogonal representation, NECoS handle nominal-scale data "as is". This means that new symbols in the input stream can be easily captured by the ANN. In contrast, a binary representation scheme cannot handle symbols that were not anticipated when the network was designed, as accommodating them would require adding input neurons.
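As a quick illustration of the difference (this is my own sketch in Python, not code from the toolbox, and the names are made up), a fixed one-hot encoding fails on a symbol it has never seen, while a scheme that handles symbols directly can simply grow its symbol table:

```python
# A fixed one-hot encoder: the input width is locked in when the
# network is created, so an unseen symbol has no valid encoding.
def one_hot(symbol, vocabulary):
    if symbol not in vocabulary:
        raise ValueError(f"Unseen symbol {symbol!r}: representing it "
                         "would require adding input neurons")
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(symbol)] = 1.0
    return vec

# A direct nominal-scale scheme: symbols are kept "as is" and the
# symbol table simply grows when a new symbol arrives.
class NominalInput:
    def __init__(self):
        self.symbols = []

    def encode(self, symbol):
        if symbol not in self.symbols:
            self.symbols.append(symbol)  # new symbol captured on the fly
        return self.symbols.index(symbol)

vocab = ["red", "green", "blue"]
print(one_hot("red", vocab))        # [1.0, 0.0, 0.0]
nominal = NominalInput()
print(nominal.encode("red"))        # 0
print(nominal.encode("magenta"))    # 1 - unseen symbol, no retraining needed
# one_hot("magenta", vocab) would raise ValueError
```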
I have also added a tool that compiles trained SECoS ANN into code. So far, the compiler outputs code in Python, C++ and C#, with each compiled SECoS represented as a single class. The compiled SECoS are recall-only and cannot be trained further. The toolbox includes examples of how to use the generated code in each of the three output languages.
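To give a feel for what the generated code does, here is a minimal Python sketch of a recall-only SECoS class. This is my own illustration under the usual SECoS recall assumptions (distance-based activation of the evolving layer, then one-of-n propagation of the winning neuron), not the compiler's actual output:

```python
class CompiledSECoS:
    """Recall-only SECoS: the exemplar (input-to-evolving) weights and
    the evolving-to-output weights are baked in at compile time."""

    def __init__(self, w1, w2):
        self.w1 = w1  # one exemplar vector per evolving-layer neuron
        self.w2 = w2  # one output-weight vector per evolving-layer neuron

    def recall(self, x):
        # Each evolving-layer neuron's activation is one minus the
        # normalised Manhattan distance to its exemplar vector.
        activations = []
        for exemplar in self.w1:
            d = sum(abs(a - b) for a, b in zip(exemplar, x)) / len(x)
            activations.append(1.0 - d)
        # One-of-n propagation: only the winning neuron drives the output.
        winner = max(range(len(activations)), key=lambda j: activations[j])
        return [activations[winner] * w for w in self.w2[winner]]

# Usage: a toy two-neuron network with a single output.
net = CompiledSECoS(w1=[[0.0, 0.0], [1.0, 1.0]], w2=[[0.0], [1.0]])
print(net.recall([0.9, 0.8]))  # [0.85] - matches the second exemplar
```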
The last group of enhancements (apart from a few minor bug-fixes) is a set of tools that convert between ARFF-format data files and the format used by the ECoS toolbox.
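For reference, ARFF is the plain-text format used by Weka, and a minimal reader is easy to sketch. The following is my own illustration, not one of the toolbox tools, and it only parses the ARFF side; the ECoS toolbox's own format is not shown here:

```python
def read_arff(path):
    """Parse a minimal ARFF file into (attribute_names, data_rows).
    Skips @relation, comments and blank lines; no sparse or quoted
    attribute support."""
    attributes, rows, in_data = [], [], False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('%'):
                continue  # comment or blank line
            lower = line.lower()
            if lower.startswith('@attribute'):
                attributes.append(line.split()[1])
            elif lower.startswith('@data'):
                in_data = True
            elif in_data:
                rows.append(line.split(','))
    return attributes, rows
```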
These tools are all free to use, but I do request that if you use them in your research, you give credit to them (and me).
Wednesday, August 5, 2015
List of Free/Open-Source Computational Intelligence Software
Evolutionary Computation
JCLEC - Java Class Library for Evolutionary Computation
http://jclec.sourceforge.net/
KEEL - Knowledge Extraction based on Evolutionary Learning
http://sci2s.ugr.es/keel/
jMetal - Metaheuristic Algorithms in Java
http://jmetal.sourceforge.net/
Fuzzy Systems
Xfuzzy - Fuzzy Logic Design Tools
https://forja.rediris.es/projects/xfuzzy/
FisPro - Fuzzy Inference System Professional
https://www7.inra.fr/mia/M/fispro/fispro2013_en.html
GUAJE - Generating Understandable and Accurate fuzzy models in a Java Environment
https://www.softcomputing.es/guaje/
Neural Systems
SNNS - Stuttgart Neural Network Simulator
http://www.ra.cs.uni-tuebingen.de/SNNS/
PyNN
http://neuralensemble.org/PyNN/
NEURON
https://www.neuron.yale.edu/neuron/
NEST-Initiative
http://www.nest-initiative.org/?page=Software
PCSIM
http://sourceforge.net/projects/pcsim/
The Brian Spiking Neural Network Simulator
http://briansimulator.org/
Neuro-Fuzzy
NEFCLASS - Neuro-Fuzzy Classification
http://fuzzy.cs.uni-magdeburg.de/nefclass/
FriDA - Free Intelligent Data Analysis Toolbox
http://www.borgelt.net/frida.html
KNIME
https://www.knime.org/
Thursday, August 16, 2012
ECoS toolbox API?
Is anyone interested in an ECoS DLL / API? I'm thinking of wrapping the functionality in the command-line ECoS Toolbox up in a DLL and providing an API so that people could include EFuNN and SECoS in their own programs. Is this a worthwhile use of my time? Would anyone use it?
Labels:
dear Internet,
ECoS,
EFuNN,
SECoS,
software
Friday, November 18, 2011
Google Scholar Citations
Google has just launched a useful tool for academics: Google Scholar Citations. This is a service on top of Google Scholar that allows you to track the number of citations each of your publications has received. One of the metrics by which academics are judged is the number of citations their publications have received, the theory being that good and useful papers will be cited more often than papers that are not. This is encapsulated by measures such as the h-index: to have an h-index of n, you must have at least n papers that have been cited at least n times each. For things like grant applications, it is useful to be able to quote your citation count and current h-index, insofar as they help convince grant committees that you can do the work you propose.
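To make the definition concrete, here is a small Python sketch (my own, purely illustrative) that computes an h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest n such that at least n papers
    have at least n citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for n, c in enumerate(counts, start=1):
        if c >= n:
            h = n  # the n-th most-cited paper still has >= n citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3
```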
It was possible to track citations with Google Scholar in the past, and to calculate your h-index manually, but this could be a bit laborious and error-prone: Scholar Citations makes it a lot easier. I was impressed to see that even with a common name like mine (there are a lot of Michael Watts in the world, and some of them are also academics) the software found almost all of my publications - there are a few that aren't available online yet - and to find the citations to them. I was quite pleased to find that I had a few more citations than I thought.
Labels:
research craft,
software
Tuesday, July 26, 2011
Software development in science
There are fundamental differences in the way in which scientists and software engineers create software. Here are two posts on two separate blogs, arguing their respective cases about the difference between the software created by scientists and the software created by software engineers. The first argues that the differences are cultural: scientists view software as a tool that just needs to work, so they don't mind writing it quickly and in a less-than-maintainable manner, while software engineers see software as a product, and so spend the time and effort to make it maintainable. The second, on the other hand, argues that it is not a cultural difference but an issue of reproducibility. Being able to reproduce results is extremely important in science - for example, a lack of reproducibility is in part how the fraudulent results of Jan Hendrik Schön were uncovered. Thus, scientific software needs to produce reproducible, and therefore trustworthy, results.
As both a software engineer and a working scientist, I tend to agree more with the second argument, but I think that the major problem is that some scientists who code are going too far outside of their area of expertise.
It takes education and a lot of experience to be able to write good code. I've been writing software for more than sixteen years now, and I think I am finally getting to the point that my coding skills are adequate. But that's after earning an honours degree in the field, spending a couple of years working closely with a truly gifted programmer, and many more years writing software for a wide variety of applications. When I first started writing scientific software, the code I produced wasn't very good: it ran OK and produced reasonable results, but it was pretty clunky and very difficult to adapt to other projects. I learned very quickly after that to design code for modularity and reusability. Reusable code, of course, is superior to code that is purpose-built each time. Apart from making it easier and quicker to produce new software, it is far more reliable: bugs are more likely to have been noticed and fixed in the earlier software.
I often tell my co-workers (who are all very good ecologists) that it is very easy to write bad software and that writing good software is hard. So, even though I spend my days writing software to process the output of some fairly painful software (that was obviously written by non-engineers), even though it takes me more time than people think it should, I still spend the time to build it according to the principles I learned as a software engineer. And every time I do that, the effort pays off later on, because I am always able to adapt my code to a new application with minimal effort, even though that application had not even been thought of when I first wrote the code.
I know that this sounds terribly snobbish, even elitist, but I look at it this way: If you want to design a reliable bridge, you need a civil engineer. If you want to design a reliable car, you need a mechanical engineer. If you want to write reliable software, you need a software engineer.
I think this problem of scientists over-reaching into code writing occurs because writing code is so easy to do, and because software can fail in subtle ways. Building a bridge takes a lot of material and manpower, and if it is not designed properly, it falls down. Building a car takes a lot of time and components, and if it is not designed properly, it crashes (or doesn't run at all). With software, however, anyone can download and install a scripting language like Python or a package like R and knock out a script that seems to do what they want. It also means that anyone can knock out numbers that look reasonable but are in fact completely wrong.
If you want good software, you need a software engineer. It's an investment that pays off in the long run.
Labels:
research craft,
software
Thursday, April 14, 2011
FuzzyCOPE 3
After many years of being absent from the web, the FuzzyCOPE 3 website is now back online.
I developed FuzzyCOPE 3 at the University of Otago in 1998-1999. FuzzyCOPE 3 is an integrated environment for data processing and fuzzy-neural network modelling. After I left Otago, it was taken off the web, but I've noticed that people are still searching for it. So, for historical reasons, I have decided to put it back up. There won't be any more bug fixes or updates, but hopefully people will find it useful. Also, there probably won't be a FuzzyCOPE 4, unless someone wants to pay me to do it.
The new address for FuzzyCOPE 3 is http://software.watts.net.nz/FuzzyCOPE3/
A paper describing FuzzyCOPE 3 is available here. The complete citation for this paper is:
Watts, M., Woodford, B. and Kasabov, N. (1999). FuzzyCOPE - A Software Environment for Building Intelligent Systems - the Past, the Present and the Future. In: Emerging Knowledge Engineering and Connectionist-based Systems, Proceedings of the ICONIP/ANZIIS/ANNES’99 Workshop "Future directions for intelligent systems and information sciences", Dunedin, 22-23 Nov. 1999, N. Kasabov and K. Ko (eds), 188-192.
Labels:
software
Wednesday, June 23, 2010
New Website on Evolving Connectionist Systems
I've just launched a website on Evolving Connectionist Systems (ECoS). ECoS are a class of constructive neural networks that learn very quickly and that do not suffer from catastrophic forgetting. The website has overviews of several ECoS algorithms, a comprehensive listing of the ECoS literature, and also links to the ECoS Toolbox, which is a collection of Windows command-line tools that implement several ECoS algorithms.
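To give a flavour of how ECoS learn, here is a heavily simplified Python sketch of the core constructive step. It is my own illustration with made-up parameter names, not code from the toolbox; the algorithms described on the website have considerably more detail:

```python
def ecos_learn(nodes, x, target, sensitivity=0.5, err_thr=0.1, lr1=0.5, lr2=0.5):
    """One ECoS-style learning step: adapt the best-matching node,
    or add a new node if no node matches well enough."""
    # nodes: list of dicts with an exemplar 'w1' and output weights 'w2'
    def activation(node):
        d = sum(abs(a - b) for a, b in zip(node['w1'], x)) / len(x)
        return 1.0 - d  # distance-based activation

    if nodes:
        winner = max(nodes, key=activation)
        out = [activation(winner) * w for w in winner['w2']]
        err = sum(abs(t - o) for t, o in zip(target, out)) / len(target)
        if activation(winner) >= sensitivity and err <= err_thr:
            # Adapt the winner towards the example instead of adding a node.
            winner['w1'] = [w + lr1 * (xi - w) for w, xi in zip(winner['w1'], x)]
            winner['w2'] = [w + lr2 * (t - o) for w, t, o in zip(winner['w2'], target, out)]
            return nodes
    # Otherwise grow: a new node captures the example exactly, which is
    # why previously learned examples are never overwritten.
    nodes.append({'w1': list(x), 'w2': list(target)})
    return nodes

net = []
net = ecos_learn(net, [0.1, 0.2], [1.0])    # first example: a node is added
net = ecos_learn(net, [0.12, 0.22], [1.0])  # close match: the node adapts
print(len(net))  # 1
```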
Update: this website is now at http://ecos.watts.net.nz/
Tuesday, January 12, 2010
AI in Second Life
The IEEE Computer Society is building an AI learning centre on its island in Second Life. It's intended to be a place where AI technologies can be shown off to the public, including the use of intelligent virtual guides (the first of which is based on the famous strategist Sun Tzu, author of the Art of War).
It strikes me as a good idea, and a fairly safe way of testing out technologies in a fairly real-world setting (for various values of "safe" and "real world"). I wonder how much cross-over there will be between this project and the AI in games research community?
Perhaps I will be taking a closer look at Second Life in the future.
Labels:
general CI,
software
Tuesday, December 22, 2009
ANN on GPU
There are quite a few publications now on implementing ANN on Graphics Processing Units (GPU) (see for example here, here and a brief review here). There are even a couple of programming libraries available that do this. The great advantage of using GPU is, of course, that GPU are massively parallel while being relatively cheap, and ANN are inherently parallel models. This cheapness also lends GPU to other high-performance projects, and GPU-based supercomputers are becoming more widely used (for example here and here).
I have yet to see, however, any publications describing constructive neural networks implemented on GPU. I suspect this may be because many constructive algorithms require steps that are difficult to parallelise, such as finding the maximum activation in a layer of neurons (although this can be done in log2(n) iterations if you compare the values in pairs).
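To make the pairwise-comparison idea concrete, here is a small Python sketch of the reduction pattern (my own illustration, not GPU code). Each pass halves the number of candidates, so on a GPU, where all the comparisons within a pass run in parallel, only log2(n) passes are needed:

```python
def pairwise_max(values):
    """Tree reduction: each pass compares values in pairs, so an array
    of n values needs only ceil(log2(n)) passes to find the maximum."""
    values = list(values)
    passes = 0
    while len(values) > 1:
        # On a GPU, every comparison in this pass would run in parallel.
        nxt = [max(values[i], values[i + 1])
               for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:  # odd element carries over to the next pass
            nxt.append(values[-1])
        values = nxt
        passes += 1
    return values[0], passes

activations = [0.2, 0.9, 0.4, 0.7, 0.1, 0.8, 0.6, 0.3]
print(pairwise_max(activations))  # (0.9, 3) - log2(8) = 3 passes
```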
That said, I do see a very bright future for ANN research in using GPU. Definitely something I will be following more closely in the future.
Labels:
neural networks,
software