Minutes of the Second Developer Conference (DevCon2, 2009)
This page hosts the minutes as provided by the participants. The following topics were covered:
- Base Data Structures / Get-Rid-Of-List
- Community/PR/Marketing
- External Library Integration
- Library Consolidation
- Consolidation NASL and NASL libraries
- Nmap integration
- OpenVAS Manager
- Tool-chain Reliability Policy
- Virtual host scanning
OpenVAS DevCon2, July 10, 2009
Discussion: Base Data Structures / Get-Rid-Of-List
Moderator: Jan-Oliver Wagner
Minutes: Michael Wiegand
Attending: Matthew Mundell, Felix Wolfsteller, Tobias von Keler, Ahmad Rooyani, Michael Wiegand

Along with the Nessus code base, the OpenVAS project has inherited a number of bad design choices regarding code structure from the Nessus project. Some of these are easy to identify, while others are not. Work has already begun on replacing badly implemented and hard-to-maintain code with glib (http://library.gnome.org/devel/glib/); one example of this is the command line parsing in openvas-client, openvas-server and openvas-nasl. The goal of these efforts is to reduce the code base and to arrive at clean, well documented code which will be easier to maintain. Easy maintenance was agreed to be an important goal because even though computing power and capacity always increase, developers' minds generally don't.

One way towards this goal is to replace individual, home-grown and often duplicated implementations of generic concepts with implementations from standard libraries. Glib was chosen as a basis because it has a large user base (e.g. through the GNOME and GTK+ projects) and is widely available. The implementations in glib are much more likely to receive multiple reviews for code quality and security than individual implementations in the Nessus/OpenVAS source code.

The first step is to identify the existing data structures in the OpenVAS code base and to evaluate their use and the potential benefits of using glib data structures in their place. One of the most widely used data structures in the OpenVAS code base is the so-called "arglist" structure, which is roughly similar to a hash table. It is extremely flexible and useful for dynamic data structures, but tends to be used as well in places where static structures would be more reasonable.
In those places, the use of arglists makes maintaining and debugging the existing code unnecessarily difficult. It was agreed that work in this area will begin by evaluating the use of arglists in the different OpenVAS modules and identifying places where simple structs or other established structures would make more sense. One of the places where improving the data structures has started is the server side NVT meta data cache; this has already resulted in reduced disk usage for the server cache. Good starting points for future work are most likely the scheduler_plugin and name_cache structures on the server side and the listnotebook structure on the client side. The automatically generated Doxygen documentation provides a good collection of existing data structures. Especially for the existing structures, the documentation should be improved in order to understand where changes make the most sense.

Another point is the creation and implementation of guidelines for internal API functionality. The goal here is a more layered approach and getting rid of the many instances of bypassing, duplication and cross-referencing present in the code right now. This would also make developing a testing infrastructure easier.

As for the "get-rid-of-list", it was agreed that the top two priorities should be replacing home-grown implementations with established functionality as described above on one side, and removing no longer needed functionality on the other. Work has begun on identifying these areas. The following files have been identified as prime targets for removal, since they contain obsolete or redundant functionality or can be more or less easily replaced by established data structures. In openvas-libraries:
- harglists.c, harglists.h
- hlst.c, hlst.h
- rand.c, rand.h
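The arglist-to-struct direction discussed above can be illustrated with a minimal sketch. The struct and field names below are hypothetical, not the actual OpenVAS cache layout; the point is that a fixed, known-in-advance set of fields gains compile-time name and type checking that a string-keyed arglist lookup cannot provide.

```c
#include <string.h>

/* Hypothetical fixed-field record replacing a generic "arglist" for
 * per-NVT meta data. Field names are illustrative only. */
typedef struct {
  char oid[64];     /* NVT identifier */
  char family[64];  /* plugin family */
  int timeout;      /* per-plugin timeout in seconds */
} nvt_info;

/* Fill a record. Unlike arglist-style calls keyed by strings, a typo
 * in a field name here is a compile error, not a silent NULL lookup. */
void nvt_info_init(nvt_info *n, const char *oid, const char *family,
                   int timeout) {
  strncpy(n->oid, oid, sizeof n->oid - 1);
  n->oid[sizeof n->oid - 1] = '\0';
  strncpy(n->family, family, sizeof n->family - 1);
  n->family[sizeof n->family - 1] = '\0';
  n->timeout = timeout;
}
```

For genuinely dynamic collections of such records, glib containers (e.g. a hash table keyed by OID) would remain the appropriate tool; the struct only replaces the static per-record part.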
OpenVAS DevCon2, July 9, 2009
Discussion: Community/PR/Marketing
Moderator: Jan-Oliver Wagner
Minutes: Felix Wolfsteller, merged with Tim Brown's
Attending: all

Slides from Kost and Jan-Oliver Wagner were taken as an entry point. We have good tools, code review, ChangeLogs etc., but a lot of work that needs ongoing effort is done by only a few hands. There are many other things and kinds of involvement that we would like to take on, but simply cannot because of the lack of (free) hands.

Work to be done
---------------
Generally we identified three types of work that could partly also be done by non-coders or by people who are not completely familiar with all the details of OpenVAS. (Of course, "Which task belongs to which category?" can be discussed endlessly.) There is:
* work that needs constant attention, like:
  - Mailing list administration
  - Certain web site updates
  - Bugtracker maintenance
  - Testing before (daily) NVT feed updates
  - IRC channel administration
* work that has to be done in intervals/from time to time (e.g. before releases):
  - Translations
  - Certain web site updates
  - Posting news and updates on online communities like freshmeat, ohloh, twitter, identi.ca
  - FAQ maintenance
  - Build (and probably functional) testing before releases
  - Talks at fairs and conferences
* work that has to be done just once:
  - New website design
  - Setup of new facilities, e.g. for build testing
  - Certain web site updates
  - Video tutorials
  - Writing articles and tutorials

There are two questions: 'How to recruit somebody to do all this?' and 'Do we need all this?'. Where there is a need for help, we have to communicate that need better (e.g. it has to be stated on the website). Also, a list of responsible persons should be maintained on the website; ideally, a deputy is found and mentioned there too. Tim Brown proposed to give all who have svn commit access OPs in the IRC channel (#openvas) to allow channel maintenance to be better shared.
A list of possible jobs (Junior Jobs) could be created and posted to the OpenVAS web site; this might include the tasks listed above. We recognized that we do speak at conferences, but are not well organized about it. As Kost states, the website lacks an area for this.

Fundraising/Donations
---------------------
If nobody can be found volunteering for the tasks mentioned above, it might be worth looking into fundraising to get these things done. Tim informed the group that we can take donations via SPI (this includes online donations via Click&Pledge); this has been set up but is not used. Tim will clarify the ways funds can be used, as he has concerns about work for hire with respect to SPI's legal status. Jan notes that SPI's partner organisation can also take donations in Europe and highlights a first kind-of-fundraising success with the workshop that preceded the OpenVAS Developer Conference 2.

(Automated) Testing
-------------------
Regarding testing before feed updates, there was a discussion whether automated tests are possible here. Coming from there, we realized that there are two kinds of needs and expectations: someone who runs more bleeding edge open source software probably does not expect that everything always works perfectly, and might accept changes more easily than someone who uses OpenVAS as part of a toolchain. It was discussed whether the openSUSE build service should be used for testing before a release. As it looks as if automated functional tests could be integrated as well, and packages for many systems and distributions could be created 'with one click', it is worth looking deeper into it. Some expertise about automated testing is available via dn-systems (Dirk).

Website
-------
Most community work would be reflected on the webpage, so here is an unsorted wishlist of webpage edits:
- A maintained 'fairs and conferences' list
- A link list (do we need additional grouping here, e.g. tutorials, discussions, related tools?)
- A list of volunteers/responsible persons
- A 'we need help' statement
- A proper definition of the OpenVAS NVT Feed
- New design?
- FAQ [recently picked up and initiated by Geoff Galitz]
- Junior Jobs?

Connections
-----------
It was agreed to apply for membership at oCERT. Connections to other communities and projects were discussed; they follow in an unsorted list:
* oCERT (http://www.ocert.org/): Tim to contact them about OpenVAS becoming a member project.
* OWASP (http://www.owasp.org/): Jan will follow this up with the .de OWASP folk. Tim noted that he knows several of the OWASP local chapter leaders (.tr, .it) as well as a number of OWASP project leaders, one of the authors of the OWASP testing guide and a number of the founders and current board.
* Planet OpenVAS (http://planet.openvas.org/): Needed fixing (now done). Any developer or contributing organisation can have their RSS feed added; speak to Tim about this.
* Twitter/identi.ca (http://twitter.com/openvas & http://identi.ca/openvas): Kost and Tim to work on integrating it with the IRC bot. Kost provides the bot source to Tim.
* ohloh (http://www.ohloh.net/p/openvas/): Felix and Kost to pursue this. Looks like a success. [it was, shortly after the DevCon]

Off Topic
---------
Tim mentioned Phrasendrescher (http://labs.portcullis.co.uk/application/phrasen-drescher/) as a possible replacement for hydra (developed by Tim's colleague at Portcullis and BSD licensed). It does not have as much support for other protocols yet, but the API is clean, so they are easy to add.
OpenVAS DevCon2, July 9, 2009
Discussion: External Library Integration
Moderator: Tim Brown
Minutes: Chandrashekhar Basavanna
Attending: Lukas Grunwald, Thomas Rotter, Sven Wurth, Thomas Reinke, Chandrashekhar Basavanna, Tim Brown, Geoff Galitz, Goran Licina, Goran Zivkovic
(see also OVDC2_nasl.txt)

SSH Library Integration
-----------------------
The SSH protocol is currently implemented in NASL. It is not easily understandable and is hard to extend. There was a proposal to integrate a standard implementation instead of re-inventing the wheel. One of the members asked specifically about the reason for the change and which problems the project members are currently facing; it was made clear that the change is driven by the move towards standards and by easier maintainability. All members present agreed that such an integration would help the project focus on the core work. Making use of an existing implementation would allow easier feature integration and reduce maintenance.

An exercise to evaluate the various existing SSH implementations must be carried out. The following well known libraries are currently available:
- OpenSSH (www.openssh.org)
- libssh2 (http://www.libssh2.org/wiki/index.php/Main_Page)
- libssh (www.libssh.org)
- LSH (http://www.lysator.liu.se/~nisse/lsh/)
The libraries have to be evaluated based on the following criteria:
- Support for various distributions
- Development support; user and developer community base
- Size of the library
- License compatibility
Decision: Geoff agreed to evaluate each library and present the results in the form of a CR.

SMB support
-----------
WMI support is currently under implementation as per CR #25. A drawback still remains for writing Windows based remote/local checks that rely on SMB/DCERPC packet crafting: there are currently no means for this, and smb_nt.inc is an outdated implementation that doesn't work with the newer authentication methods. The WMI implementation is based on the Samba code base. Since this work is ongoing and since no suitable alternative is available to provide packet-crafting-level APIs for NASL writers, all members decided that it is easier to integrate Samba and expose the low level functionality.
Decision: Chandra to create/update the CR for the Samba implementation.

Other protocols
---------------
Other implementations that are complex in nature could follow the same model as SSH and SMB. Each such proposal has to be taken through the Change Request process.
OpenVAS DevCon2, July 11, 2009
Discussion: Library Consolidation
Moderator: Jan-Oliver Wagner
Minutes: Jan-Oliver Wagner
Attending: Matthew Mundell, Felix Wolfsteller, Tobias von Keler, Ahmad Rooyani, Michael Wiegand
(Results in CR #38: http://www.openvas.org/openvas-cr-38.html)
OpenVAS DevCon2, July 10, 2009
Discussion: Consolidation of NASL and the NASL libraries
Moderator: Thomas Reinke
Minutes: Tim Brown
Attending: Lukas Grunwald, Thomas Rotter, Sven Wurth, Thomas Reinke, Chandrashekhar Basavanna, Tim Brown, Geoff Galitz, Goran Licina, Goran Zivkovic

Internal infrastructure
-----------------------
* API calls: For now we use script_id; script_oid is the way forward. Communication regarding moving to it: Jan and Thomas to coordinate.
* File system: Scripts are to be ordered in (sub)directories. Different approaches were discussed. Agreement on directory names equalling family names, plus an include directory (both feed and svn).
* Obsolete plugins: Why remove them? Legal reasons; potentially malicious code. How? At minimum a note (log_warn?). We will not remove plugins for outdated systems.
* Avoiding duplication: Sometimes what seem to be duplicates are none (e.g. remote vs. local checks). -> Make a note in a text file managed under svn for each new one.
* Coordinated effort of implementing NVTs for CVEs: Currently a plain text file is used. Communication of the process (svn), mailing for others.

External integration
--------------------
* CPE, OSVDB, CVSS, BID, CVE: Should meta data be moved to a database or to external files? Besides a general agreement to separate the meta data from the NVTs, many open questions were found: How? What about the client? In the first stage just the NASLs themselves? This gives i18n and easier updates. Chandra (was?) volunteered to create a Change Request to handle this change. References to CVE etc. are recommended but not mandatory.

Protocol implementations: in NASL or part of the NASL API?
With the prime example of SSH, there is and will be functionality implemented in NASL that could be handled by other, already existing libraries. This can come with obvious advantages (better support, implemented by experts in their fields, etc.), but also raises a couple of issues:
* SSH: Questions about a non-NASL SSH implementation: What about licensing? Support? Which library? Distros? Geoff (was?) volunteered to investigate the issue.
* SMB: Chandra will raise a Change Request to integrate libsmb.
* General guidelines: Where libraries are used as a transport, it will be recommended that, provided the protocol is of sufficient complexity, a CR be raised to use the appropriate solution developed by domain experts. (see OVDC2_external_libs.txt)

NASL harmonisation (Moderator: Thomas Reinke)
* Unit tests: All .inc files should be documented. Thomas is going to do some analysis of which functions are most important (hack the NASL compiler), both statically and at runtime. (Results were posted on the mailing list: http://lists.wald.intevation.org/pipermail/openvas-devel/2009-July/001629.html)
* Documentation: Jan will look at ways to use doxygen for NASL files.
* Vhosts: This is a killer feature. As a first step we will look at how to specify a vhost for a given IP, but the intention is to support multiple vhosts. (see OVDC_vhosts.txt)
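The meta data separation agreed above would start by pulling tags out of the .nasl files themselves. As a purely illustrative sketch (not the planned implementation, which the minutes leave open), extracting the argument of a script_oid() call from NASL source text could look like this:

```c
#include <string.h>

/* Hypothetical first step towards separating meta data from NVTs:
 * extract the argument of a script_oid("...") call from NASL source
 * text so it can be stored externally. Returns 1 on success,
 * 0 if no script_oid() call is found or the buffer is too small. */
int extract_script_oid(const char *nasl_src, char *oid, size_t oid_len) {
  const char *p = strstr(nasl_src, "script_oid(\"");
  if (p == NULL)
    return 0;
  p += strlen("script_oid(\"");
  const char *end = strchr(p, '"');
  if (end == NULL || (size_t)(end - p) >= oid_len)
    return 0;
  memcpy(oid, p, (size_t)(end - p));
  oid[end - p] = '\0';
  return 1;
}
```

A real implementation would of course use the NASL parser rather than string matching; this only illustrates the direction of moving tags into an external store.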
OpenVAS DevCon2, July 10, 2009
Discussion: Nmap integration
Moderator: Jan-Oliver Wagner
Minutes: Jan-Oliver Wagner, Matthew Mundell
Attending: all

* Slides submitted by Kost to the openvas-devel mailing list were used to initiate the discussion.
* The standard usage of Nmap by OpenVAS at the moment is port scanning.
* Efficiency/memory consumption of Nmap:
  - Efficiency depends on usage.
  - Experience shows: Nmap is bad with multiple processes and bad for specific (multiple) IPs (which is how OpenVAS uses Nmap). It performs better on class C networks.
* Executable vs. library:
  - This question is discussed in Eric's document and in some further emails on openvas-devel.
  - It will not necessarily solve the memory problem.
* Prepass of Nmap:
  - This is already practiced, e.g. by Tim and Tom.
  - It does not solve the memory problem.
  - It introduces a time window in which the network can change. Of course, the network can change during the test whatever method is chosen.
  - Perhaps make the prepass an optional alternative to per-host scanning.
  - Scans can hang for a while and prevent the prepass from finishing fast.
* Chunking the Nmap scan:
  - To cope with the memory problem, it seems to be a valid option to chunk the task for Nmap, e.g. into chunks of 1000 ports.
  - It appears not doable to handle such chunks in the plugin scheduler.
  - Could the chunk size be a preference of the nmap NASL script? This might make sense, but will not allow some NASL scripts to start earlier (because of the way the scheduler works).
  - Agreed: Try out a chunked version of the Nmap NASL wrapper to see whether it improves performance.
* Separating the port scan out of OpenVAS:
  - An even more radical approach than the prepass would be to separate port scanning out of OpenVAS entirely.
  - It was agreed that this does not bring any real advantage and introduces more problems for the handling.
  - After all, it is already possible (i.e. you can import an nmap scan result).
* Network discovery:
  - Host discovery could precede the port scan phase.
  - The scan has to wait for it before it starts.
  - Reliability is debatable, but fun for a start; in general this could only be informational.
  - Results can go into the KB and be used by NASL scripts, and especially be refined by NASL scripts (-> inventory).
  - Could provide a start for the scan target list.
  - Can be a problem with hosts you shouldn't scan.
* NSE integration:
  - There are about 60 Lua scripts from the Nmap project.
  - The question is whether we want to add yet another language, but in fact it is only about offering Nmap scripts to OpenVAS users, not about having the OpenVAS NVT team use Lua.
  - The NSE scripts are of little use for pen testers.
  - The output should be parsable.
  - General agreement to postpone this until NSE is stable.
* Service detection:
  - find_services.c: a C module, hopelessly out of date, with a broken concept in terms of updating patterns. It has ca. 100 signatures while nmap has more than 5000.
  - Generally agreed: we need a NASL script to handle service detection.
  - nmap can do the service detection and fill the KB accordingly.
  - Essential: should the signatures be distributed via the feed to deliver the most up-to-date set easily? If so, it needs to be clarified which signature file will be used (the system-provided one, the one from the feed, an individual one). After all, this should be configurable.
  - The OS's signature file might be very much out of date; on the other hand it might be specially crafted (does this happen in reality?).
  - How do the nmap people manage to keep signatures up to date? We need to talk to the nmap people about this idea.
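The chunking option agreed above can be sketched as a small helper that yields successive Nmap-style port ranges. This is illustrative only, not the actual NASL wrapper change; the 1000-port chunk size is just the example value from the discussion.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of chunking a port range for successive Nmap
 * invocations (e.g. "nmap -p 1-1000 <host>", then "-p 1001-2000", ...).
 * Writes the port range for chunk `index` (0-based) into buf;
 * returns 0 when the index is past the end of the range. */
int nmap_port_chunk(int first_port, int last_port, int chunk_size,
                    int index, char *buf, size_t buf_len) {
  int lo = first_port + index * chunk_size;
  if (lo > last_port)
    return 0;
  int hi = lo + chunk_size - 1;
  if (hi > last_port)
    hi = last_port;
  snprintf(buf, buf_len, "%d-%d", lo, hi);
  return 1;
}
```

A chunked wrapper would loop over the indices, run one Nmap pass per range, and merge the open-port results into the KB, trading a little runtime overhead for a bounded per-invocation memory footprint.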
OpenVAS DevCon2, July 11, 2009
Discussion: OpenVAS Manager
Moderator: Jan-Oliver Wagner
Minutes: Geoff Galitz
Attending: all

A new model was introduced (graphic to be supplied). The primary change proposed here is to add a new layer called the OpenVAS Manager, which would sit between the client and the scan server. The primary purpose is to offload administrative tasks and other non-scanning-related tasks from the openvas-server component. At the same time this is a good opportunity to remove old, unused code and to plan for more flexible scanner management. Informational points follow.

Proposed benefits:
- Security: move the bulk of operations to a non-privileged binary.
- Scalability: manage multiple scanners from a central location.
- Scalability/reliability: store reports on the manager/server side.
- Reliability/efficiency: reports stored in an SQL database.
- Reliability: the manager can sanity check requests and client connections.
- Reliability/efficiency: remove unused code from the current openvasd.
- Reliability/efficiency: reduce OTP complexity.

Concerns/drawbacks:
- Workflow: The new model interferes with workflows in customized environments, for example users and scripts that interact directly with the server, bypassing the current client (see the note on compatibility).
- Workflow: Highly customized environments that implement the current OTP protocol directly at any point (see the note on compatibility).
- Security/privacy: With multiple data stores (reports saved on the manager and optionally on the client), data privacy becomes more complicated. How many versions of the same data must be secured or securely deleted? How can this be verified?

It was agreed that the concerns were not significant enough to warrant any changes to the architecture at this time.

Details:
- Miscellaneous: Reducing OTP complexity ultimately means reducing the supported command set to four commands: long attack, preferences, plugins order and attached file.
- Compatibility: The current client/server model is still supported temporarily. When a client connects to the manager using the OTP protocol, the manager acts as a proxy to the OpenVAS server. Hence backward compatibility is maintained until the current form of the OTP protocol is ultimately retired. Ultimately, any customized environments that use the OTP protocol directly will need to be ported to the new model, which is more work for the affected users.
- Near term goals: report management; removing unused/bad code.
- Mid term goals: distributed scanner management; enhanced user management; OTP2.

TBD items: NVT syncing across a distributed configuration.
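The reduced OTP command set named under Details could be enforced in the manager with a simple whitelist check. The function below is a hypothetical sketch; the manager's real dispatch logic and the exact on-the-wire command tokens are not specified in these minutes, so the strings are taken verbatim from the list above.

```c
#include <string.h>

/* Hypothetical whitelist for the reduced OTP command set: only the
 * four commands the manager would still accept. Returns 1 if the
 * command is allowed, 0 otherwise. Command spellings are illustrative. */
int otp_command_allowed(const char *command) {
  const char *allowed[] = { "long attack", "preferences",
                            "plugins order", "attached file" };
  size_t n = sizeof allowed / sizeof allowed[0];
  for (size_t i = 0; i < n; i++)
    if (strcmp(command, allowed[i]) == 0)
      return 1;
  return 0;
}
```

In proxy mode the manager could apply such a check before forwarding a client's OTP traffic to the scan server, which is one concrete way it could "sanity check requests" as listed under the proposed benefits.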
OpenVAS DevCon2, July 11, 2009
Discussion: Tool-chain Reliability Policy
Moderator: Jan-Oliver Wagner
Minutes: Jan-Oliver Wagner
Attending: Matthew Mundell, Felix Wolfsteller, Michael Wiegand

Summary:
* The OpenVAS team already practices a systematic method of release management for all components of the OpenVAS tool chain.
* However, it is not really documented and transparent for the public. Documenting it could attract more users who do care about release management, especially when they connect their own tool chains to OpenVAS.
* It was proposed to have a web page that collects information about:
  - The procedure for a new release.
  - How long a release is supported (not necessarily a time span; it could be dynamic criteria).
  - The procedure to retire a release.
  - The names of all interfaces that are taken care of in terms of remaining unchanged/compatible. NBE, OMP, OTP and openvasrc definitely belong to this group. However, there might be more interfaces, like the command line options of OpenVAS-Client, and all of these need to be named and described explicitly.
* Stability can be supported from the source code side as well, especially with unit testing. We cannot implement unit testing for all of the code rapidly; it will likely take a very long time. Those elements of OpenVAS that are the most important interfaces to other tools should be covered first when starting to implement unit testing.
* The procedures should be applied and practiced for the first time with the version that retires OpenVAS 1.0.
OpenVAS DevCon2, July 9, 2009
Discussion: Virtual host scanning
Moderator: Thomas Reinke
Minutes: Chandrashekhar Basavanna
Attending: Lukas Grunwald, Thomas Rotter, Sven Wurth, Thomas Reinke, Chandrashekhar Basavanna, Tim Brown, Geoff Galitz, Goran Licina, Goran Zivkovic
(see also OVDC2_nasl.txt)

OpenVAS does not support scanning virtual hosts on web servers; it scans only the given IP or host. All members in the discussion group agreed that this feature is real added value and needs to be supported. Three requirements for implementing virtual host support were discussed:
- Virtual host discovery
- Updating plugins to support virtual host scanning
- Reporting
The following approaches were discussed and debated.

I. Virtual host discovery
1. Automatically discovering all virtual hosts through web mirroring techniques: though such techniques are available, they are not accurate in listing all virtual hosts.
2. Manual method of entering all virtual hosts in a text box or providing them through a file.
Decision: Option #2

II. Updating plugins to support virtual host scanning
1. Loop through all virtual hosts and perform the scan for each host. Ideally, each web related plugin should be updated to include:
   vhost = get_kb_item("VirtualHosts");
   foreach dir cgi_dir DO_CHECK
   Also, wherever HTTP requests are being constructed, we should update the header to contain:
   Host: virtual_host
   If the number of vhost values returned is large, the plugins might run for a very long time; the plugin timeout value might need to be set to a large number. When get_kb_item is called and returns multiple values, OpenVAS will fork multiple processes; a fork limit might need to be set in order to limit the number of forks.
2. Instead of looping through all virtual hosts, the administrator can enter the virtual hosts directly as scan targets. In this case, plugins need not be updated to scan through all virtual hosts. But each plugin needs to be audited to check whether
   Host: virtual_host
   is handled appropriately.
Decision: Option #2

III. Reporting
When multiple virtual hosts are scanned on a single system, there is a possibility that the same report might be listed multiple times.
1. Reports have to be created separately for each scanned virtual host. This can be achieved by automatically prefixing the virtual host name to the report description, or by enhancing the security_ APIs to include an additional parameter called v_host. Example: security_warning(port, data, v_host)
2. If the user enters the virtual hosts as scan targets, reports will be created separately anyway.
Decision: Option #2
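The Host header handling that requirement II hinges on can be sketched as a small request builder. The function name and layout are hypothetical, not the actual NASL http API; it only illustrates what each plugin must get right: the request line targets a path on the server, while the Host header names the virtual host.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the per-vhost request construction from
 * requirement II: when a check is run against a virtual host, the
 * HTTP request must carry the vhost name in the Host header rather
 * than the target IP. Returns the number of bytes written, or -1
 * if the buffer is too small. */
int build_vhost_request(const char *path, const char *vhost,
                        char *buf, size_t buf_len) {
  int n = snprintf(buf, buf_len,
                   "GET %s HTTP/1.1\r\n"
                   "Host: %s\r\n"
                   "Connection: close\r\n\r\n",
                   path, vhost);
  if (n < 0 || (size_t)n >= buf_len)
    return -1;
  return n;
}
```

The audit proposed for option 2 would amount to checking that every web plugin builds its requests through a path like this one instead of hard-coding the target IP into the Host header.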