The Mechanics Prototype Shop at the Nokia campus created prototypes for multiple Nokia sites, and its ordering communication was based on e-mail. Customers often sent orders with incomplete information, which caused unnecessary back-and-forth communication. Additionally, proper archiving (outside of e-mail), searchability, and visibility into orders were lacking.
The Prototype Order System was designed as a web application storing information in a searchable database. A form collecting complete information for each order was designed. Additionally, a short e-mail notification was sent for each order so the protoshop manager could review job details in the web tool.
The web-based Prototype Ordering System fixed the obvious inefficiencies of the old e-mail-based ordering process. Orders in the new system contained complete request information, and the customer gained visibility into the progress of the work order.
The Nokia Corporate Web Phonebook allows people to perform searches by different attributes to access people's telephone numbers, organizational information, sites and address information.
The Phonebook application was developed in Perl, using advanced Perl data structures to store configuration and meta information as well as the runtime result sets from database searches. Backend data was stored in a PH database (open source, from the University of Illinois). A reusable module for handling client-side PH database result sets was created in the process.
The Phonebook is the core of corporate contact information and receives roughly 80,000 hits per day. Users often refer to it as the "most important app" on the intranet.
The corporate SMS messaging gateway was created in conjunction with the corporate phonebook and allows people to send SMS messages from the web user interface of the phonebook. The system also allows querying the corporate phonebook by sending a simple SMS message with a query command and a keyword parameter to a dedicated number. The query result is sent back to the mobile phone in the standardized Business Card (vCard) format in the case of a single match, or as an informative free-form message in the case of multiple matches.
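The reply-formatting logic described above can be sketched as follows. This is an illustrative sketch in Python (the original system was Perl and C), and the field names and reply wording are assumptions:

```python
# Sketch of the SMS query reply formatting: one match yields a vCard,
# anything else yields an informative free-form message.
def format_reply(matches):
    """Return a vCard for a single match, or a free-form summary otherwise."""
    if len(matches) == 1:
        m = matches[0]
        return "\r\n".join([
            "BEGIN:VCARD",
            "VERSION:2.1",
            f"N:{m['last']};{m['first']}",
            f"TEL;WORK:{m['phone']}",
            "END:VCARD",
        ])
    # Multiple (or zero) matches: informative free-form message
    names = ", ".join(f"{m['first']} {m['last']}" for m in matches)
    return f"{len(matches)} matches: {names}" if matches else "No matches"
```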
The complete architecture of the system comprised an SMS daemon developed in C, capable of being compiled and run on any POSIX-compliant platform (e.g. Linux, Solaris), a phonebook messaging server written in Perl, and a web frontend, also written in Perl. For the SMS messaging, the system used Nokia GSM modem hardware connected to a UNIX server over a serial RS-232 connection. The modem hardware was interfaced from the UNIX-based OS through the POSIX termios serial device API, using polling to track incoming messages while concurrently operating the outbound link.
The corporate SMS messaging system rapidly became very popular and handled thousands of messages per day. The system was later converted from a GSM modem to direct TCP/IP communication with an SMSC (short message service center), while retaining the user and query interfaces created during the project.
With Nokia switching to Microsoft Exchange-based mail servers, there was a need to monitor their mail transfer queues at a low level. The reports were available on an FTP server, but improved visibility and easier readability through a web-based user interface were required.
A web application was built using the Perl Net::FTP module to fetch the log files from the FTP server. The log files were parsed into an internal representation, which allowed them to be output in a clear, readable tabular format (combining information from multiple separate files fetched from the server).
Files that were previously reachable only by FTP, in a rudimentary format, were now viewable with a few clicks on hyperlinks in an easy-to-use web application. The old, tedious approach had practically kept people from monitoring the queues; the new web viewer not only saved time and effort but also encouraged people to actually monitor them.
Nokia Phonebook information was accessible from the intranet by web browser and also via SMS-based querying (for Nokia internal staff only). To improve accessibility and usability for mobile users, WAP/WML querying of the phonebook was added to complement the HTML interface.
Because of the modular design of the phonebook, WML support was easily accomplished by overriding the output methods of the original HTML-based output. Using any WAP-enabled phone automatically triggered the WML formatting of the pages: WAP devices were detected from the incoming HTTP request headers, which indicated the WAP capability of the device. To adapt to the constraints of small devices, result sets were windowed into smaller pages and fewer fields were displayed in phonebook result listings. Part of the initial development was carried out in a WAP/WML emulator environment.
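The detection and windowing described above can be sketched roughly as below. This is a Python illustration (the phonebook itself was Perl); the header value is the standard WML MIME type, but the window size and function names are assumptions:

```python
# Sketch of WAP device detection from HTTP request headers plus the
# smaller result windowing applied for small devices.
def wants_wml(headers):
    """Return True when the request's Accept header indicates a WAP/WML client."""
    accept = headers.get("Accept", "").lower()
    return "text/vnd.wap.wml" in accept

def render(headers, entries, window=5):
    """Pick the output format, windowing result sets smaller for WAP devices."""
    if wants_wml(headers):
        return "wml", entries[:window]   # fewer entries per page for phones
    return "html", entries
```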
The ease of mobile use of the Nokia phonebook, especially moving from listings of search results to individual entries, was significantly improved over the previously provided SMS-only querying (for devices that supported WAP/WML).
Hardware assets and their current configuration (such as IP addresses, CPU, memory and HD info, NIC cards) were maintained by the PC group in an Excel spreadsheet, which suffered from file locking issues, a multiple-copies problem and occasional corruption. A replacement was needed to allow concurrent editing, public visibility and the ability to search assets.
The hardware asset information from the Excel sheet was imported into an LDAP database, and an application to manage (create, edit, search) the assets was developed in Perl using the Mozilla::LDAP Perl API. The tool was later extended to generate NIS host netgroup maps from the group information associated with the hardware information.
This application eliminated inconsistencies in hardware maintenance and gave public visibility into hardware asset information and ownership. The information also allowed displaying people's hardware assets in the local San Diego phonebook (with the hardware "owner" field tying the two together).
The old way of creating carrier configurations for phones, using text editors and requiring configuration settings to be typed in hexadecimal notation, had proven very error-prone for the marketing department staff. The PRI web tool enabled creating carrier configurations through an intuitive web user interface and downloading the configuration directly from the tool. The tool also allowed easy administration of all available settings (per product family) within the same tool.
The PRI Tool was composed of several object classes encapsulating handle specifications, current configuration instances, product phases, output compilers and UI modules. Since the settings model was stored in an LDAP database, the settings documentation was integrated with the database as well, and automatically generated documentation was available in the tool next to each setting field. Format validation of settings was implemented both in the UI of the tool and in the backend output compiler to improve the correctness of settings. The configuration tool could output the configuration either in directly phone-flashable hexadecimal format or in XML format, which was inherited by the next generation of the tool.
By replacing the old error-prone process, the web tool improved configuration throughput remarkably and eliminated the old trial-and-error nature of the process. The auto-generated documentation within the tool helped newcomers to the phone configuration staff understand the settings quickly. The tool was in exclusive use for all DCT-3 generation CDMA phones at the San Diego site from mid-summer 2000 until 2003, when the active portfolio of DCT-3 phones was phased out.
The Finite Element Method (FEM) analysis crew in the Mechanics line had earlier established simulation reporting via the intranet in HTML format for easier site-to-site access and visibility. The number of reports grew so large that the report meta-information needed to be made searchable, and the nature of the report documents made maintaining meta-information within the document files impractical. Additionally, there was a need to track task durations, completion and other important information about the reports. The local San Diego prototype shop and mechanical testing group also wanted to join the effort to improve their process tracking.
An LDAP directory was chosen as the storage for the report meta-information. The URLs of the report documents were stored in the database and presented in the application UI for direct, easy document access. The application was later converted to use a relational (MySQL) database as storage.
The Mechanics simulation, protoshop and testing task database improved visibility into each of the report types. Task duration tracking allowed better planning and improved estimation of task throughput for management purposes.
There was a need for a central organization hierarchy maintenance tool to allow Human Resources, IT and project support personnel to maintain the site's organization hierarchy information. The tool had to be easy to use and have a web frontend. The organizational information maintained with the tool can be used to output various representations, such as organizational personnel listings with associated job role information, dynamic browsable organizational charts, and links to organizational web homepages.
Org Editor was developed in Perl, with organization information stored in an LDAP directory database. Object-oriented techniques were used to hide and maintain the redundant two-way linking between parent and child organizations inherent in LDAP databases. OO inheritance was used to derive organization objects from a lower-level base LDAP entry and "append" organizational characteristics to it.
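The hidden two-way link maintenance can be sketched as below. This is a Python illustration of the idea (the original was Perl over LDAP entries), and the class and attribute names are invented:

```python
# Sketch of keeping redundant parent/child links consistent: re-parenting
# an organization updates both directions in one operation, so the
# two-way links cannot diverge.
class Org:
    def __init__(self, name):
        self.name, self.parent, self.children = name, None, []

    def add_child(self, child):
        """Attach a child org, detaching it from any previous parent first."""
        if child.parent is not None:
            child.parent.children.remove(child)
        child.parent = self
        self.children.append(child)
```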
The central storage of organizational information allows business applications (and web servers) to refer to live, up-to-date organizational information for workflow decisions, and allows access both for control purposes and for dynamic organization charting. The tool has been (and continues to be) used by Human Resources and Intranet Solutions for six years as the exclusive tool for maintaining the organizational structure of the San Diego site.
Mobile phones have internal product performance counters (PPC) for errors that occur within the mobile software and hardware. These counters, stored on the phones, act as diagnostic logs for phone development and testing. An easy way was needed to upload the stored counter information from the phone to more permanent server-based storage using the web browser built into the phone.
The browser, developed by the San Diego mobile SW team, had an integrated capability to return counter information as a binary HTTP POST upload. To initiate an upload, the phone first needed to query which counter set to return. The binary counter information sets from the phones were stored centrally in files residing on the web server receiving the uploads. The counter set profile configuration was designed and implemented with an HDML (Handheld Device Markup Language, from Openwave/phone.com) based UI. Because of the extranet environment, the data storage was kept simple with an ASCII file-based solution; the ASCII files were stored in a format directly importable into Excel.
The project provided an interactive thin-client solution for configuring sets of PPC counters from multiple phones to be centrally stored on a single server. The central storage enabled later large-scale analysis and reporting.
The NIS map integration project established the storing of traditional UNIX NIS maps in an LDAP directory according to standardized schema supported by UNIX vendors. Tools were also created to replicate the information into traditional NIS systems to support older-generation UNIX platforms still relying on NIS information exclusively.
Schemas for users, UNIX groups, services and RPC information were created according to the standard definitions in RFC 2307. A centralized admin tool for administering the information was created in Perl, based on the LDAP Browser framework. The replication (from LDAP to NIS) and the build of NIS maps could be operated from a web user interface or from the command line (on a scheduled basis from the cron service).
This project enabled easy editing and searching of user accounts and other NIS information, as well as more coherent storage of system information. All modern UNIX workstations could also gradually be switched over to using LDAP-stored NIS information (with increasing support from vendors).
The PRI customer settings configuration tool also involved settings that carried interdependencies. Despite the friendly user interface, interdependent settings (i.e., one setting implying another as mandatory) were prone to error. There was a need to introduce a higher-level "abstract" setting, classified as a "Feature", to bind the lower-level settings together.
A new setting type called "Feature" was created within the tool, allowing a set of more primitive settings to be bundled together so that no discrepancies could arise in the configuration. To the end user, a feature appeared as just another named setting; internally, the tool expanded it into multiple individual settings.
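The expansion step can be sketched as below. This is a Python illustration of the mechanism (the tool itself was Perl); the feature name, setting names and hex values are all invented for the example:

```python
# Sketch of "Feature" expansion: a feature bundles primitive settings so
# interdependent values can never drift apart in a configuration.
FEATURES = {
    "voicemail": {"vm_enabled": "01", "vm_number": "86", "vm_icon": "01"},
}

def expand(settings):
    """Replace feature-type settings with the primitive settings they imply."""
    expanded = {}
    for name, value in settings.items():
        if name in FEATURES and value == "on":
            expanded.update(FEATURES[name])   # feature -> bundled primitives
        else:
            expanded[name] = value
    return expanded
```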
The likelihood of errors in PRI configuration decreased, with the tool taking care of consistency among interdependent settings.
The previous leave calendar system deployed at the SD center provided a way to enter individual leaves and view a center-wide calendar of people on leave, but was becoming slow because of its underlying ASCII file-based storage. A replacement was needed both for performance and for future extensions.
The leave calendar was rewritten in Perl, with storage based on a MySQL relational database. The application screens were structured as a set of callback subroutines invoked on actions, a design that would later allow easier extension of the application.
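The action-to-callback structure can be sketched as below. This is a Python illustration (the original used Perl subroutines), and the action names and handlers are invented:

```python
# Sketch of the callback dispatch structure: each screen/action maps to a
# subroutine, so new actions are added by registering a new callback.
def show_calendar(params):
    return f"calendar for {params['month']}"

def add_leave(params):
    return f"added leave {params['date']}"

ACTIONS = {"view": show_calendar, "add": add_leave}

def handle_request(action, params):
    """Route each incoming request to the callback registered for its action."""
    handler = ACTIONS.get(action)
    if handler is None:
        return "unknown action"
    return handler(params)
```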
The functionality of the leave calendaring system was largely unchanged, but the rewrite provided vast performance improvements as well as a better base for future extensions.
The partlist of electronic phone components, traditionally generated and checked by hand, is sent to the manufacturing plant as the source format for the assembly machinery, which handles individual part logistics within the assembly process. A tool was needed to automate the time-consuming and error-prone manual component list generation.
The Mentor/EDMS Partlister was implemented as a Perl web application built from a set of object-oriented modules. The UI part allows browsing product variants in a Mentor design directory tree and choosing the product variant for which to output a component partlist. After a product variant is chosen, the parser module structures the component data into a deeply nested tree of related objects. Each object is able to output itself in the specified format, allowing complete automation of the process.
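The self-outputting tree can be sketched as a composite, as below. This is a Python illustration (the Partlister was Perl); the class names, fields and output format are assumptions:

```python
# Sketch of the nested component tree: every node knows how to output
# itself, so emitting the whole partlist is one recursive call.
class Part:
    def __init__(self, ref, value):
        self.ref, self.value = ref, value

    def output(self, indent=0):
        return " " * indent + f"{self.ref}\t{self.value}"

class Assembly:
    def __init__(self, name, children):
        self.name, self.children = name, children

    def output(self, indent=0):
        lines = [" " * indent + self.name]
        lines += [c.output(indent + 2) for c in self.children]
        return "\n".join(lines)
```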
The partlist generator sped up the former process of hand-editing parts lists from ASCII-formatted files and removed the high likelihood of error involved in the manual process.
The disk usage monitor system allows tracking disk usage on site file servers through a web user interface, as well as triggering automated notifications to users who are close to exceeding their quota. Usage can be reported on the web frontend by user, disk server or disk volume, and trends are shown in both graphical and tabular format.
The disk usage monitor consists of a component that records disk usage on a schedule and a web-based tool that reports the usage by the desired criteria. The tool was created in Perl, with usage-trend graphs produced with the GDI graphics library for Perl. Release 2 of the tool sped up collection by querying NetApp file servers directly for usage, and included enhanced graphing using a PHP4-based graphing toolkit.
The tool increased people's and organizations' awareness of redundant data and data to be archived, and as a result helped reduce disk space consumption. Top disk users and the associated cost centers paying for the disk space are effectively tracked and alerted with the help of the tool.
The Actuate reporting framework allows writing custom plug-ins for authorizing users to access reports. In the absence of a usable plug-in, IT had to write a custom plug-in to authorize Actuate users to view reports based on the group information stored in the LDAP directory at the San Diego site.
The project involved learning the Actuate plug-in API, designing the structure of a loosely coupled, reusable API for extracting the members of a nested LDAP group tree, and implementing the code modules involved. After the LDAP traversal API was implemented, the project involved assisting and mentoring a colleague developer in implementing and testing the final code module. After the initial development in a UNIX environment, the C-language code was compiled directly for a production environment on a Windows NT system. The code module was implemented as a shared library (*.so on UNIX, *.dll on Windows) and was loaded via configuration file directives into the Actuate runtime reporting framework.
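The nested-group member extraction can be sketched as below. The production module was C against the LDAP API; this Python sketch models the group data as a dict and adds a visited-set guard, which is an assumption about how cycles were handled:

```python
# Sketch of recursive member extraction for a nested LDAP group tree.
GROUPS = {
    "reports-all": {"members": ["alice"], "subgroups": ["reports-eu"]},
    "reports-eu":  {"members": ["bob", "carol"], "subgroups": []},
}

def expand_members(group, groups, seen=None):
    """Recursively collect direct and indirect members of a group tree."""
    seen = set() if seen is None else seen
    if group in seen:                       # guard against group cycles
        return set()
    seen.add(group)
    node = groups.get(group, {"members": [], "subgroups": []})
    members = set(node["members"])
    for sub in node["subgroups"]:
        members |= expand_members(sub, groups, seen)
    return members
```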
The plug-in module allowed reusing the existing organizational groups and access control groups to control access to the reports, instead of maintaining them redundantly. Eliminating the redundant maintenance produced savings and removed the risk of the repositories getting out of sync.
With an increasing number of relational database applications, there was a need to keep people and organizational information (originally mastered in the LDAP system) up to date in a relational database. The reasons included the need to translate ID values stored in databases into name information, and the higher performance of RDB searches.
A generic data pump was created with the ability to map field names from LDAP attribute naming to RDB attribute naming. The tool also allowed writing reusable field-specific plugins for manipulating field content during replication from the LDAP source to the RDB target. The pump also detects whether an entry (by ID) already exists in the target or whether it should be created (insert vs. update).
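The mapping, field plugins and insert-vs-update decision can be sketched as below. This is a Python illustration (the pump was Perl); the attribute names, mapping table and plugin hook are invented:

```python
# Sketch of the data pump: rename LDAP attributes to RDB columns, apply
# per-field plugins, and decide insert vs. update by entry ID.
FIELD_MAP = {"uid": "employee_id", "cn": "full_name", "mail": "email"}
PLUGINS = {"mail": str.lower}            # field-specific content manipulation

def map_entry(ldap_entry):
    """Rename LDAP attributes to RDB columns, applying field plugins."""
    row = {}
    for ldap_attr, column in FIELD_MAP.items():
        value = ldap_entry.get(ldap_attr)
        if value is not None:
            row[column] = PLUGINS.get(ldap_attr, lambda v: v)(value)
    return row

def decide_operation(row, existing_ids):
    """Update when the target already holds the entry's ID, else insert."""
    return "update" if row["employee_id"] in existing_ids else "insert"
```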
The data pump has been in use for over four years, maintaining data synchronization between the configured LDAP and RDB schemas.
The leave request system is a tool for entering and viewing leave information in calendar format via a web user interface. The previously developed, relatively simple leave calendar needed to be extended to handle the workflow of requesting and approving leaves, as well as tracking leave consumption. The tool needed to calculate current vacation balances dynamically based on annual leave grants; the balances calculated at pay period ends are automatically sent to regional headquarters each pay period. In its new form, the leave request tool replaced the old paper-based process for requesting leaves.
The existing leave entries were remodeled to hold leave status and associations to approvers, and to handle the complex lookup of approvers and the communication of approval tasks by e-mail. The tool also allows users to synchronize leaves to their calendars (MS Outlook or any vCal-compliant calendaring software). Managers can view a summary of balances for all their subordinates, so they can notify subordinates of under- or over-use of leave. The calendar view within the application can filter entries by organization.
The application and related process were taken into use center-wide, enabling public visibility of people's leaves, archiving of leave information, and advanced searches (reports) on leave calendar data. For HR, the tool produced remarkable time savings compared to the old paper-based process.
The addition of LDAP traversal to an existing Apache LDAP authentication module allows recursive gathering of the indirect members of higher-level group nodes during the authorization phase of access control.
Programmed in C, the traversal task was implemented as a compact object library, which was then called at the appropriate locations within the existing LDAP authentication and authorization module. The traversal algorithm was based on controlled recursion. The traversal operation (the attributes to traverse, the depth of traversal, and the debugging verbosity) is configurable through directives in the web server's global configuration file.
The addition of traversal allowed migrating from the existing iPlanet-based platform to an Apache-based web infrastructure while retaining the existing web access control lists. Intelligent composition of access control lists (ACLs) from reusable subgroups reduces the ACL maintenance effort dramatically.
The access control engine transforms access control lists from a central source into various formats through a web frontend. The engine can output the access lists in formats for UNIX Samba, the Apache web server, the Netscape web server and UNIX filesystem access rights. Output generation is mainly used for creating web server URL access configurations.
The ACL generation is implemented around a generic access object that can hold individual users and groups for the read and write access types. The object allows arbitrary input sources and output targets for ACL data, with the input and output formats implemented as adapters for the object. The input source adapters implemented were the Netscape/iPlanet web server ACL file and LDAP-stored access control lists. An application wrapping the access object allows previewing the access control lists as well as storing them to their intended destinations.
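The adapter arrangement can be sketched as below. This is a Python illustration of the design (the engine was Perl); the directive syntax shown is heavily simplified and the class names are invented:

```python
# Sketch of the generic access object with format-specific output adapters.
class AccessList:
    def __init__(self, read=(), write=()):
        self.read, self.write = set(read), set(write)

class ApacheAdapter:
    """Render the generic ACL as a (simplified) Apache require directive."""
    @staticmethod
    def output(acl):
        users = " ".join(sorted(acl.read | acl.write))
        return f"Require user {users}"

class SambaAdapter:
    """Render the generic ACL as a (simplified) Samba share option."""
    @staticmethod
    def output(acl):
        return "write list = " + " ".join(sorted(acl.write))
```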
The storage of access control lists in LDAP database (with related automated generation of system specific formats) allows a unified interface to controlling access to multiple systems. The automated creation also ensured correct syntax for all the formats.
The site platform migration included a transition from the Netscape/iPlanet web server on the Sun Solaris OS to the Apache web server running on the Linux OS. Both target infrastructure components are well-proven open source software.
The major step in the migration was integrating traversal into the LDAP authentication module, allowing the existing access control lists to be used, and testing the code changes. The previously developed LDAP traversal library was integrated with an existing simple LDAP authentication plugin that had no traversal capability. The existing access control lists (about 200 ACLs) had to be converted to the format used by the new web server (see the ACL management project). The Apache web server was compiled from its distributed source tree.
The project allowed transitioning from a slowly progressing commercial platform to a progressive open source infrastructure with customizability, rapid fixes to security issues and zero licensing cost. The cumbersome iPlanet UI for managing web server access, and the web server's tendency to occasionally corrupt its ACL file, were eliminated by the new web server architecture and the ACL management system (see previous projects). The migration as a whole was carried out in collaboration with the system administration team.
Product Milestone Manager tracks the development of mobile products with their related features, subfeatures and milestone schedules (for products, features and subfeatures), as well as customers for each product, suppliers for feature/subfeature modules, and representatives for development objects on the corporate side. The system also manages schedule dependency relationships, triggering relevant alerts along the product/feature/subfeature dependency chain when the schedule of a dependency threatens to delay the schedule of a higher-level entity. The application allows authoring and editing of all the data objects involved.
The application was developed in Perl, with construction, modification and deletion of each of the development objects transparently handled by singular handlers to avoid writing redundant code for the data schema of 11 RDBMS tables. Advanced object-oriented techniques were used to accomplish this transparency of operations among the process objects.
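The singular-handler idea can be sketched as below. This is a Python illustration (the application was Perl), and the table and column names are invented; the point is that one generic handler builds SQL for any table from shared metadata instead of per-table code:

```python
# Sketch of a generic handler driven by schema metadata: one function
# generates the INSERT for any of the application's tables.
SCHEMA = {
    "product": ["id", "name", "phase"],
    "feature": ["id", "product_id", "name"],
}

def build_insert(table, values):
    """Generate a parameterized INSERT for any table in the schema."""
    cols = [c for c in SCHEMA[table] if c in values]
    placeholders = ", ".join("?" for _ in cols)
    sql = f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"
    return sql, [values[c] for c in cols]
```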
Milestone Manager consolidates product, feature and milestone information, typically stored in fragmented, constantly out-of-date spreadsheets, into one view of the program schedule. Proper authorization and access control allows the relevant people to update the scheduled tasks for the best possible public visibility.
A change in the underlying storage systems used on site made it possible to query the NetApp disk storage servers directly, allowing finer-grained statistics gathering on disk consumption and usage tracking at a more granular level.
The old disk usage gathering, based on UNIX command line tools (du, ypcat), was converted to use NetApp-supported querying to get disk usage per user, server, volume and subvolume (previously only per user/server). The new release also replaced the graphing done by a fairly primitive Perl-based toolkit with a higher-level, more polished PHP4-based toolkit. In addition, the timespan of the graphs was extended from 50 days to one and a half years, and the allowed quota and actual usage were displayed side by side on the same graph.
The project achieved better granularity in disk usage tracking, speed improvements in the gathering phase, and better graphing. The database hosts over 5 million recorded disk usage samples with no noticeable performance degradation as the sample count grows.
Employee Services needed a lightweight, easy-to-use tool for sending peer recognition messages to colleagues. The system needed to notify the line manager of each submitted message, and to produce monthly reports to line managers on the recognitions given to their personnel.
The tool was rapidly developed on top of an existing framework, with custom plugins written to implement the organizational lookups that notify line managers of the awards. Messaging was embedded into the hosting framework as part of the project.
The tool continues to be in use at the SD site, with about 4,000 awards entered over two and a half years.
The customer needed to be able to delete occasional invalid test result sets recorded from the phone testing stations. The results from the testing stations were recorded in an Oracle relational database, and an administrative editing console needed to be developed to allow edits and deletions.
The system used the existing three-table hierarchical data schema stored in the Oracle database. The application was developed using an existing framework.
The tool allows making administrative changes through a web-based user interface without needing to master SQL (or reason about complex inter-object relationships) to delete results.
An ISO quality quiz tool was needed to raise awareness of the ISO quality standard in the center. The tool had to be able to evaluate the answers for correctness and store the results in a database.
The quiz tool was developed in Perl using a relational database as storage. The task led to the development of a general-purpose quiz/survey framework able to handle any multiple-choice survey with a very short lead time.
The ISO quiz has been in use for several years now. The framework has been used to publish several surveys and quizzes with only short configuration work.
The Cameleon product program needed a tracking database for application bugs on very short notice before the release of the product.
A framework created in Perl enabled setting up the application very quickly with bare configuration and very little testing (no code was written to implement the application).
The application URL was communicated to the phone testers, who were immediately able to report bugs online. This provided a consolidated place to report bugs and errors.
The outdated, unsupported Oblix conference room reservation system was to be replaced with a product from another vendor. The new product, Resource Scheduler from MeetingMaker, had to be loaded with all past information as well as meetings reaching half a year into the future. Information on users, resources (conference rooms), equipment in conference rooms, and reservations (tying together resources, users and times) all had to be exported and imported into the new system.
A set of Perl scripts was prepared to parse the vCalendar-formatted data stored by the Oblix system, expand the recurring reservations, and store the data in an intermediate schema in a relational database. From the intermediate schema, the data was formatted into the stored procedure calls used by Resource Scheduler to import the data. The design and implementation involved intensive use of deep data structures (to store reservations, resources and equipment) as well as challenging algorithms (to expand the recurring reservations). The complete import operation was controlled by a traditional makefile that launched the various phases of the export/import with proper sequencing and dependencies.
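The recurrence expansion step can be sketched as below. This is a Python illustration that handles only a simple weekly rule (the real scripts parsed full vCalendar recurrence rules in Perl):

```python
# Sketch of expanding a weekly recurring reservation into its individual
# occurrences, up to and including an end date.
from datetime import date, timedelta

def expand_weekly(start, until, interval=1):
    """Return each occurrence of a weekly recurring reservation."""
    occurrences = []
    current = start
    while current <= until:
        occurrences.append(current)
        current += timedelta(weeks=interval)
    return occurrences
```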
The data was successfully transferred to the new system with no interruption in the availability of conference room booking.
Applications have a recurring need to log changes to entries as data is edited. A simple way had to be created to log the changes at the object instance / database entry level.
A relational DB schema and a Perl module with a clear API and configuration interface were developed to allow logging changes to arbitrary data types (database entries or object instances). Changes are logged by application, object type and object identity; the application itself is free to choose the format (and granularity) in which each change is logged.
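The logging API can be sketched as below. This is a Python illustration (the module was Perl over a relational table), and the call signature and field names are assumptions:

```python
# Sketch of the change-log API: each record identifies the application,
# the object type and identity, and a free-form change description.
LOG = []   # stands in for the change-log table

def log_change(application, object_type, object_id, description):
    """Record one change at the object-instance / database-entry level."""
    LOG.append({
        "application": application,
        "object_type": object_type,
        "object_id": object_id,
        "change": description,   # format and granularity chosen by the app
    })
```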
The module allows enabling change logging in any DB application programmed in Perl with minimal changes.
UNIX installations at various Nokia sites needed to be tracked in a central repository of UNIX variant (Sun Solaris, HP-UX, Linux) installations. The requirement was to be able to later produce reports (not within the scope of this project) based on the collected data.
A CGI-based gateway script was created to let the open source netcat utility make a simple HTTP GET request, with the installation info passed as a parameter, to log the information into the database. Using the netcat utility, found out of the box in most UNIX variants, kept the solution as lightweight as possible, with minimal additions to the installation scripts and no need for additional software components (such as database drivers).
There is now a central repository for UNIX installations from the various sites, from which reports can be extracted.
MPART, the Mobile Phone Access Request System, needed an admin/management tool outside the phone request application to fix and edit information at the lower levels (not enabled by the request process application).
An existing framework was quickly configured around the existing set of Oracle tables used by the MPART application. The short configuration process enabled instant editability of the phone request entries.
The admin console was accomplished with no code development effort.
The in-house Test Scheduler tool and the commercial off-the-shelf TeamTrack tool needed to exchange test task scheduling data, so that test reservations made in the in-house tool could enter the complex workflow maintained by the TeamTrack system used by the product test factories at the Nokia San Diego site.
Vendor-enabled XML messaging (SMTP mail messages with XML content) was used to map the Test Scheduler data to TeamTrack. An object-oriented module with an extremely simple interface was written to represent the message sent between the systems; the implementation language was Perl. An ID mapping/exchange mechanism enabled the systems to refer back to each other's entries to poll the status (completion) of scheduled tasks.
Achieved Intercommunication and exchange of data between two systems to be integrated.
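The transport pattern — an XML document carried as the body of an SMTP mail message — can be sketched as follows. The original module was Perl; this Python sketch uses placeholder addresses and an invented task element, not the actual TeamTrack schema:

```python
from email.mime.text import MIMEText

def build_xml_message(sender, recipient, xml_payload):
    """Wrap an XML document in an e-mail message -- the transport used by
    the vendor-supported integration. Addresses/subject are illustrative."""
    msg = MIMEText(xml_payload, "xml")
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "TeamTrack import"
    return msg

xml = "<task><id>TS-123</id><status>scheduled</status></task>"
msg = build_xml_message("scheduler@example.com", "teamtrack@example.com", xml)
# smtplib.SMTP("mailhost").send_message(msg) would submit it for delivery.
```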
A tool was needed by the CDMA Variant team to create packages containing cell phone media files (often specific to a model/type and carrier configuration). The tool had to be web enabled to allow a single point of access, and the package needed to be immediately downloadable from the tool.
The tool was written in Perl with its core modelled as a set of Perl object classes. An admin user interface was set up for adding phone models, their variants and media types, avoiding the need to change the tool itself. The tool was later extended to handle additional information needed by Java certificates, and SIM lock script generation capability was added later still.
All CDMA phone media packages and SIM lock scripts are created by the Variant team using the DCP tool. The tool eliminates the error-prone manual creation of the index file (a listing of packaged files and some other settings) and the manual packaging into ZIP/DCP format. The produced packages are ready to be used as-is without further processing.
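The automated packaging step can be sketched roughly like this. The original tool was Perl and the real DCP index format is not reproduced; the `name=size` index lines below are an assumption for illustration:

```python
import io
import zipfile

def build_dcp_package(media_files, settings):
    """Build a ZIP-based package with an auto-generated index file.

    media_files maps filename -> bytes; settings maps key -> value.
    The index format here is invented, not the actual DCP format.
    """
    index_lines = [f"{name}={len(data)}" for name, data in media_files.items()]
    index_lines += [f"{key}={value}" for key, value in settings.items()]
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as package:
        package.writestr("index.txt", "\n".join(index_lines))
        for name, data in media_files.items():
            package.writestr(name, data)
    return buffer.getvalue()

package = build_dcp_package({"ringtone.mid": b"\x4d\x54"}, {"carrier": "XYZ"})
```

Generating the index from the same data that is packaged is what removes the hand-maintenance errors.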
Employee Services needed a tool to allow managers (and selected employees) to submit monetary awards to employees. The system should notify Employee Services on submission of an award and allow the delivery progress/status to be tracked and updated.
Because of the similar application pattern, the tool was based on the Rockstar awards peer recognition tool's data schema with a few fields added. A new private application instance with much shared functionality was created to enable notification messaging and progress status management.
The tool provided an easy-to-use wizard-like UI for submitting employee awards and centralized tracking of the submitted awards.
The project-to-line cost center mapping information, previously maintained in an Excel spreadsheet, needed to be kept in a database environment so it would be available to workflow-based web applications that charge costs from a product program to an organizational line cost center.
The finance-owned Excel spreadsheet was imported into a table schema holding the project-to-line cost center mapping along with the associated cost center approvers and approver-specific cost-approval limits (in dollar amounts). The information was then managed with an existing application framework.
Any workflow-based application (such as the center-wide Travel tool) can query (project-to-line) cost center settlement information from a single well-maintained central system. Because of this accountability, the information stays well maintained.
RUIM (Removable User Identity Module) cards are simple memory cards for storing configurations and personal information (such as contacts) in mobile phones. With the slowly increasing use of RUIM cards in CDMA phones, the cards used at the RD center for personal use and testing purposes were tracked in an Excel spreadsheet. With hundreds of registered cards there was a need to edit the card registry concurrently and to search and sort the items more meaningfully.
The RUIM card information was imported into a relational database and an application (based on an existing framework) was configured to manage it.
The relational database solution allowed concurrent editing by more than one card administrator and public visibility to the information.
The San Diego RD Center has a large number of web applications that were hard to track (for quantity and quality) as filesystem-only data. Cataloguing them would allow registering each application's developer and contact people, the URLs where it resides, the data sources it uses, and so on.
A registry of web applications was created with an existing framework. The Intranet Solutions team members registered all their applications in the catalog to allow rapid reference to each application's key information.
Visibility to the applications improved by registering them in a central catalog.
The EDA tools licensed from the vendor use a server-based licensing system that EDA tool users tend to forget about, letting a tool sit open while not actually using it. This exhausts license pools and forces additional license purchases. A tool was needed to provide visibility to EDA license usage and to notify users who have kept a tool open longer than a configurable time threshold.
A set of Perl modules was developed around the Flex License Manager "lm_" tools, so that a clean object-oriented layer wraps the command-line tool activity. The modules launch the appropriate tools that fetch information from the license server and parse the output stream into internal object data. The data can then be efficiently looked up for detection and rendered for the UI. A cron job polling the license usage sends notifications to users exceeding the limit; alternatively, the system can be triggered to forcibly terminate a user's license use.
License usage awareness increased among users through the notifications they received, and administrators can now run reports on license use very effectively. The license tracker also became part of the Nokia global SEAL license awareness program distribution.
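The wrapping approach — launch an `lm_` command and parse its text output into structured data — might look like the following. The original was a set of Perl modules; the line format and regular expression here are illustrative, since real `lmstat` output varies by version:

```python
import re

# Illustrative checkout-line shape; actual "lmstat -a" output varies.
USAGE_LINE = re.compile(r"^\s*(\S+)\s+(\S+)\s+.*start\s+\w+\s+(\S+)\s+(\S+)")

def parse_checkouts(lmstat_output):
    """Extract (user, host, date, time) tuples from lmstat-style output."""
    checkouts = []
    for line in lmstat_output.splitlines():
        match = USAGE_LINE.match(line)
        if match:
            checkouts.append(match.groups())
    return checkouts

sample = "    joe host1 /dev/pts/3 (v2.1) (licsrv/27000 101), start Mon 10/4 9:01"
checkouts = parse_checkouts(sample)
```

Once parsed into tuples, the data can be compared against the time threshold to decide who gets a notification.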
The Nokia global UNIX account management project - Needy - was having its account management tool developed in PHP. The application saw rapid requirements growth and much more functionality was requested. Its functionality was already spread across 40 files, and common changes (improvements and fixes) were hard to implement across files with a lot of repetitive code.
The webapp framework was a framework written in Perl, designed to allow maximum code sharing by isolating features into handlers called back by the framework. It also allowed extracting shared portions of code into handlers invoked before all screen-specific handlers, helped maintain database connections to various data sources, and minimized the number of files where screen handlers reside. The existing Perl version was ported to two PHP object classes, and the existing application code was refactored from a long linear scripting style into better-isolated handlers.
The number of files was drastically reduced (to around 5) and the amount of repeated code shrank to a manageable quantity. Development now proceeds at a rapid pace and in a less error-prone way compared to the old code management problems with dozens of files.
The previously established project tracking UNIX host installations needed to expand its scope to installed software packages as well as update patches (service packs).
The CGI recording gateway was extended to support additional data types for patches and packages. The changes were implemented so that adding further tracked types would be very easy.
The application provided quick and easy site-to-site remote visibility to the packages and patches installed on host machines without the need for remote logins and command-line usage.
In preparation for Nokia site UNIX teams using LDAP to master UNIX user account and UNIX group information, an LDAP-to-NIS pump was needed to enable a smooth transition from exclusive use of NIS to exclusive use of LDAP account management. The pump would allow mastering the information in the LDAP directory while still reflecting it to the NIS system used by UNIX workstations.
The related code modules were quickly refactored into object-oriented modules and a configuration layer was added to enable using the pump at any Nokia site. A packaging scheme (based on tar and gzip) with version control and usage documentation made the package quickly and easily deployable on any Nokia site.
The pump is currently in production use at the Salo and Tampere (Finland) and Ulm (Germany) sites. Thanks to clear documentation, no questions on installation or usage have been submitted to the developer.
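The core of such a pump — rendering directory-sourced account entries as NIS map lines — can be sketched as below. The original was Perl with a configuration layer; this Python sketch assumes conventional RFC 2307 attribute names, which may differ from the actual schema used:

```python
def accounts_to_passwd_map(accounts):
    """Render LDAP-sourced account records as passwd(5)-style NIS map lines.

    Attribute names follow the RFC 2307 convention (uid, uidNumber, ...);
    the real pump's attribute mapping was configurable per site.
    """
    lines = []
    for acct in accounts:
        lines.append(":".join([
            acct["uid"], "x", str(acct["uidNumber"]), str(acct["gidNumber"]),
            acct.get("gecos", ""), acct["homeDirectory"], acct["loginShell"],
        ]))
    return "\n".join(lines)

accounts = [{"uid": "jdoe", "uidNumber": 1001, "gidNumber": 100,
             "gecos": "Jane Doe", "homeDirectory": "/home/jdoe",
             "loginShell": "/bin/bash"}]
passwd_map = accounts_to_passwd_map(accounts)
```

The generated map text would then be fed to the NIS make step so workstations keep resolving accounts unchanged.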
The VMPS (VLAN Management Policy Server) tool had been in use at the San Diego site for two years, but by fall 2004 the LAN had been split into more segments, one being dedicated to the ITP test stations. There was a need to manage the VLAN policy separately for machines in the ITP cluster.
Initially the customer suggested copying and altering the original application. Instead, the existing tool was inspected for the sections that would have to change for a new tool instance. The sections and settings needing to be configurable were extracted from the original code and externalized to configuration files. On each request the tool extracts its tool ID from the called URL and applies the relevant settings accordingly.
The new VMPS tool instance allowed managing the ITP LAN as a separate segment and eased the access control issues that would have arisen with a single tool. Because the tool was made configurable rather than copied and changed into a new instance, a single tool can be maintained and further developed.
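The configurable-instance pattern can be sketched as follows; the instance IDs and settings fields are invented for illustration (the original externalized them to configuration files rather than code):

```python
# Per-instance settings externalized to data; the tool derives its
# instance ID from the request URL and picks the matching settings.
# Keys and fields below are illustrative, not the tool's actual ones.
INSTANCE_SETTINGS = {
    "vmps": {"title": "VMPS Manager", "vlan_file": "/etc/vmps/main.db"},
    "vmps-itp": {"title": "ITP VMPS Manager", "vlan_file": "/etc/vmps/itp.db"},
}

def settings_for_url(path):
    """Select instance settings from the first URL path component."""
    tool_id = path.strip("/").split("/")[0]
    return INSTANCE_SETTINGS[tool_id]

settings = settings_for_url("/vmps-itp/hosts")
```

One code base, many instances: adding another LAN segment becomes a configuration entry instead of a fork.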
A web tool (being developed by a colleague) needed to perform two translations on cell phone binary image files to be flashed as phone SW into phone memory. To avoid re-implementing the conversion within the webapp (Perl), the existing Windows-based command-line utilities were chosen for the task.
Porting the source code of the two command-line utilities (raw2img, img2fiasco) from the Windows environment required analyzing the source files for the Windows dependencies they carried. The .dsp build file from Windows had to be converted into a generic makefile supported by the UNIX make utility.
A makefile was written to drive the relatively simple GCC build from source. Compiler warnings were followed to eliminate Windows dependencies. One proprietary Windows string function (int stricmp(s1,s2)) was replaced with its BSD UNIX library equivalent (int strcasecmp(s1,s2)). To avoid modifying every call site of the Windows function, the change was implemented with a preprocessor macro included under a conditional preprocessor directive for UNIX-based compilation. The changes made to the source were kept to a minimum and fully documented in the utilities' makefiles.
Re-implementation of the converter utilities was avoided by reusing the existing, already developed and tested utilities.
The ISD (Intranet Solutions Development) team strategy involves moving to Java web-based development for new applications. The plan was to use popular open development tools to get as much support as possible in the form of existing SW components, documentation and examples.
A Tomcat servlet server was set up on the ISD development (Linux) server and minimal configuration changes were applied to fit the environment. Example servlets utilizing Java JDBC (database connectivity), JNDI (Java naming and directory interface) and other fundamental classes were developed to lead the way in development. The reflection classes were also tested to allow creating powerful generic classes for fetching and storing database entities, avoiding rewriting the same patterns again. The Ant build environment (an equivalent of the make utility) was installed to manage the tasks involved in building Java applications (handling dependencies and large file trees, packaging and deploying applications, notifying by email). An example build was configured to demonstrate the power of Ant.
The new Tomcat environment allows creating any applications utilizing servlets and JSP pages. The Ant build environment automates and simplifies the build process inherent in Java applications.
The leave tool requires importing employees' annual leave grants into its database to allow calculating leave balances from the leaves taken. A Perl/DBI-based script for the import task existed, but with strategic plans to move to Java-based development this relatively short script was ported to a Java/JDBC-based utility.
The tool, consisting of basic file reading, parsing and database connectivity, was rewritten in Java to highlight the differences between Perl and Java in file operations, parsing, data structures and database connectivity. The tool operated from the command line.
The new leave grant import allowed a comparison between Perl- and Java-based development.
The DCP packaging tool needed to be extended to support security information related to Java certificate files embedded into a digital content package (DCP). While the Javacert file type was already registered with the tool, the existing index file format with simple key-value formatting needed to be extended to hold multiple values for a single key for Javacert files.
Because of the more complex format of the Java certificate security fields, the index file format produced by the DCP creation tool had to change. The tool's object structure was refactored to contain Java certificate security information. When any files of type Javacert are embedded into a package, the tool's user interface presents an intermediate wizard-like screen for filling in Java certificate information before proceeding to package creation.
The ability to enter JavaCert information via the DCP tool allowed continued use of a single, central tool for DCP package creation.
The Variant team had a need to generate SIM card (Subscriber Identity Module) lock scripts that control a subscriber's access to certain phone numbers, for example by excluding certain countries, regions or area codes. The manual process involved a lot of low-level chores, such as conversions between little-endian and big-endian byte orders and formatting results in hexadecimal notation into the SIMLOCK script files. There were also many repetitive sections in the format, making manual creation of the file an even more error-prone task.
The DCP packaging tool used by the Variant team was a natural place for SIM lock generation, because the audience generating SIM lock scripts was the same. SIM lock script generation was developed according to the script format specification from the mobile SW group. Generating the scripts involved low-level operations on numeric data, such as bit masking, bit shifting and byte swapping.
The ability to generate SIM lock scripts via the DCP tool allowed continued use of a single, central tool within the Variant team, and generating the scripts with a tool removed the occasional errors introduced in hand-created scripts.
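The low-level chores the generator automates — byte-order swaps and hexadecimal formatting — look roughly like this in Python (the original was part of the Perl-based DCP tool; the exact field widths of the SIMLOCK format are not shown):

```python
import struct

def to_little_endian_hex(value):
    """Format a 32-bit value in little-endian byte order as hex digits --
    the kind of conversion the manual process got wrong occasionally."""
    return struct.pack("<I", value).hex().upper()

def swap16(value):
    """Swap the bytes of a 16-bit value (big-endian <-> little-endian)."""
    return ((value & 0xFF) << 8) | (value >> 8)

le_hex = to_little_endian_hex(0x12345678)
swapped = swap16(0x1234)
```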
IT has a large number of file and directory resources whose ownership is assigned to groups larger than necessary (large generic groups). Because of the technical nature of the group management task and the administration rights required, IT ended up adding people to access control groups several times a day.
The ACL management tool provides an easy-to-use request interface where access requests can be submitted online. Approval triggers real-time addition of the requestor to the relevant access control group with no need for IT intervention.
The ACL tool reduced the work of IT staff and improved the cycle times for adding people to the relevant directory access control lists by allowing the business owners of directories to respond to requests immediately.
In anticipation of the organization information mastering location moving from LDAP to an RDB, any legacy scripts writing to LDAP organizational targets had to change. Certain call and initialization patterns in files using LDAP API calls needed to change.
A Perl script was prepared to efficiently search ("grep") for API call patterns in all Perl (*.pl, *.pm) files on a 4 GB disk area, recording the matching files and the locations (line offsets) of the matching patterns. Approximately 2000 matches were found in roughly 500 files. Because of the teamwork involved in the actual script changes, all the match-related data was stored in database tables for easier reference, and a web tool was created to indicate script change progress.
Creating an automated tool for searching API patterns eliminated manual review of files. Storing the information collected by the pattern matching script in a database allowed easy concurrent access to it.
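The search script's approach — walk the tree, test each line of each Perl file against the pattern, record file and line offset — can be sketched as below (in Python rather than the original Perl; the pattern shown is an invented example, not one of the actual LDAP API calls):

```python
import os
import re

def find_api_calls(root, pattern, extensions=(".pl", ".pm")):
    """Walk a directory tree and record (path, line_number, line) for every
    line matching the given API-call pattern -- an automated 'grep' whose
    results can then be bulk-loaded into database tables."""
    regex = re.compile(pattern)
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as handle:
                for lineno, line in enumerate(handle, 1):
                    if regex.search(line):
                        matches.append((path, lineno, line.rstrip()))
    return matches
```

Each tuple maps directly onto a database row, which is what enabled the shared progress-tracking web tool.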
Web applications and server components need to authenticate users against common/standard user account repositories. To avoid each application implementing its own version of authentication, a common module easily integrated with applications was needed.
The Java JNDI (Java Naming and Directory Interface) API shipping with the standard Java J2SDK toolkit was chosen as the LDAP directory access and authentication API. A facade API was built around the more complex and granular JNDI API to allow authentication with a single method call.
The authentication module was easily integrated into application JSP pages and servlets and eliminated the need for per-application implementations.
With a new regional tool being introduced, the SD site Leave tool had to move from "master" mode into "slave" mode, acting effectively as a leave calendar only.
The active features (create, edit) of the tool were suppressed so that enabling/disabling features is controllable through configuration. This means the tool can be re-enabled with full features without code changes.
Use of the familiar leave tool continued in view-only calendar mode.
With the introduction of the new MyTime/Kronos tool, data synchronization between the old and new systems had to be arranged in two directions: the new tool had to import existing leave data from the old tool, and the old tool, now running in calendar mode, had to receive a continuous feed of new data from the new system.
A data pump was created to transfer existing data to the new MyTime system. Another pump was created to feed data from the new system back to the leave calendar to keep it up to date.
The pumps enabled integration between the new and old leave/PTO management systems.
The Product Factory Focus tool was required to track the product variants produced at various Nokia (and external) production facilities across the globe. The tracking covered Program, Model, Package, Target Carrier and SW Version.
An existing framework was chosen to quickly implement the tool.
Visibility to Factories improved remarkably from the days of fragmented Excel file versions.
Without a development server, PHP development in the production server environment risked the stability of the production environment in cases where a PHP-based script exceeded resource limits (such as an eternal loop).
PHP was compiled as an Apache module against the header files of the existing (old) Apache installation on the development server. The module was built as a UNIX shared/dynamic object, installed in the Apache modules directory and configured to be loaded dynamically into the server process space at start-up.
A safe environment for PHP development was achieved by providing the module on a proven development environment.
The new UPC/FOTA (Flashing over the air) standard allows packaging mobile SW/firmware updates (fixes, upgrades) in a standard XML-formatted package designed to be compatible across mobile SW vendors. One of the major cell carriers was planning to implement its phone upgrades using the UPC (Update Package Container). A convenient intranet tool was needed to allow the build team to generate the UPC package, with the involved base64 content encoding, checksum calculation and security signature generation, at a high level of automation.
A web-based tool was created in Perl. The highly hierarchical package structure (allowing encapsulation of multiple different update paths for multiple different devices ...) was made configurable from a single dynamic web screen covering the various parts of the package. The dynamic screen was achieved using a combination of CSS stylesheets and JavaScript. The only step to generate a package descriptor from the application main screen was to click the "Generate" button and then either view the XML or download it.
The UPC content packager allows quick and easy creation of firmware update packages in the industry-standard format. Validations built into the tool eliminate errors from the package content. The alternative of creating packages with a text editor, or even a dedicated XML editor, would not allow this level of automation.
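The mechanical steps the tool automates — base64-encoding the firmware payload and computing its checksum into an XML fragment — can be sketched like this. The element names and checksum algorithm are assumptions for illustration; the actual UPC schema and the signature step are not reproduced here:

```python
import base64
import hashlib

def build_update_entry(firmware_bytes):
    """Produce the encoded payload and checksum fields for one update path.

    Element names and the sha1 choice are placeholders, not the UPC schema.
    """
    encoded = base64.b64encode(firmware_bytes).decode("ascii")
    checksum = hashlib.sha1(firmware_bytes).hexdigest()
    return (
        "<update>"
        f"<data encoding=\"base64\">{encoded}</data>"
        f"<checksum type=\"sha1\">{checksum}</checksum>"
        "</update>"
    )

entry = build_update_entry(b"\x00\x01firmware-image")
```

Doing the encoding and checksumming in code is what makes the built-in validations possible; a hand-edited XML file offers no such guarantee.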
With a few development activities outsourced to an external company, an information pump was needed to keep the employee directory information and related organizational information in sync with the proprietary database schemas hosted on an Oracle database server and used by the new applications.
A data pump was developed in Java, defining a universal class interface describing a generic replication operation. A class implementing the interface was written for each entity type to replicate (users, organizations, organization memberships), taking into account the specifics of each replication type. Data access was implemented using the Java JNDI (Naming and Directory Interface) API for the local LDAP directory and the JDBC (Database Connectivity) API for the targeted relational database (Oracle). The Java container classes (Hashtables, ArrayLists) were used heavily for indexing and creating a memory-persistent image of the data to replicate. For potential system layout changes, the most-likely-to-change parameters, such as LDAP and RDB/JDBC server connection URLs, were externalized to a properties file for ease of configuration by admin staff.
The data pump keeps the employee/organizational master data repository in sync with the schemas of the external vendor applications, eliminating redundant account creation and maintenance in those applications' databases.
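The universal-interface idea — one abstract replication operation, one concrete class per entity type — can be sketched as below. The original was Java with JNDI/JDBC backends; this Python sketch stubs the source and target with in-memory lists, and all names are illustrative:

```python
from abc import ABC, abstractmethod

class Replicator(ABC):
    """Generic replication operation; one subclass per entity type
    (users, organizations, memberships)."""

    @abstractmethod
    def fetch_source(self):
        """Read the source entries (e.g. from an LDAP directory)."""

    @abstractmethod
    def apply_target(self, entries):
        """Write the entries to the target store (e.g. Oracle via JDBC)."""

    def replicate(self):
        entries = self.fetch_source()
        self.apply_target(entries)
        return len(entries)

class UserReplicator(Replicator):
    def __init__(self, source, target):
        self.source, self.target = source, target

    def fetch_source(self):
        return list(self.source)

    def apply_target(self, entries):
        self.target.extend(entries)

target = []
count = UserReplicator([{"uid": "jdoe"}], target).replicate()
```

Keeping the shared `replicate` flow in the base class lets each entity type override only the parts that differ.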
With intentions to form a merger with a new company, the authentication directory would have to change, and more configuration settings would be required to point the plugin at alternative directory schemas. There was also a need to allow the plugin to authenticate and authorize against separate directories (instead of a single directory).
The plugin module was altered to support a few more configuration keywords read from the module configuration file. This involved both altering the parser to recognize the new keywords and keeping the new configuration data persistent in module-accessible memory during the reporting server's runtime. One of the new configuration variables allowed using a separate server for the authentication phase of access control.
The module change allowed almost unlimited configurability for connecting the authentication plugin to arbitrary LDAP directories, allowing quick reconfiguration in a changing environment.
The FTP file sender tool was missing logging to follow up on FTP transfer tasks. The logging was also required for auditing purposes.
Logging was implemented as a simple column-delimited tabular ASCII file that could easily be downloaded from the tool in Excel format. The log could also be viewed as an HTML-formatted table within the tool.
The transfer tasks were properly logged for both management and auditing purposes.
With new EMC file servers added to the site, there was a need to monitor their space consumption and record it in the central database used by the site's web-based disk usage monitor tool.
A Perl script was written to parse the three separate reports (file servers, quota trees, disk usages) into a data container saved into the existing database storing users' disk usage figures. A few new attributes (for the EMC "soft" quota and time "grace" periods) were added to the existing schema, and the graphing on the reporting side of the tool was altered to support the new figures such as "soft" quotas.
Adding support for the EMC file servers eliminated many end-user questions about file usage. Having all file usage visible made it easier for users to clean up their unneeded files.
The site had adopted a VMware virtual hosting cluster to host the new production and development web servers. This created a need to keep the access control information in sync across the redundant servers.
The existing ACL generator tool was changed to trigger replication of the generated master ACL configuration file across all sibling servers of the cluster (4 altogether) and signal them to re-read the configuration from the newly replicated ACL (access control list) configuration file.
Because of the smart synchronization scheme, the ACL information is always in sync on all redundant servers, allowing them to appear as one in their accessibility. With frequent ACL changes, maintaining and administering server ACLs individually would create remarkable admin overhead.
The Nokia global UNIX account management system had been created by a team of diverse members using multiple approaches, keeping the code in sync with Subversion (SVN) version control. A complete review of the system was in order to ensure code quality and optimal performance as well as future maintainability and extensibility of the code.
The project team was called together with a high-level action plan. The code was reviewed and analyzed, and an implementation plan was put together. The plan was executed by 3 developers applying the improvement changes in place. In the process the code was radically refactored, utilizing SVN extensively (with constant commits and merges). The changes were tested in the development environment to ensure correct behaviour.
The performance and clarity of the code were improved remarkably. Partitioning functional scopes into separate files eased concurrent code changes. By not introducing new features, the stable functional baseline could be referenced during testing to ensure high software quality.
There was a need for end users - mostly project managers and administrators - to control access to shared network directories. Network directories could be shared via UNIX(/Linux) NFS, Windows file servers, or Samba (Windows file service on Linux), and the controlled resource could also be a database instead of a directory. A problem arose when the same directory area was shared via both UNIX and Windows, with separate configuration sources maintaining the group ACLs. To keep these in sync and prevent "data leaks" (from mismatches in the ACLs), a tool was needed.
A web tool was created so that each directory access entry has an owner as well as the ACLs in the various systems that should be kept in sync. The system supports any LDAP directory as an ACL container (e.g. Microsoft AD or the Sun Microsystems LDAP directory). On any ACL change, the system syncs the associated group ACLs on all backend systems. Additionally, any resources marked to be synced to the legacy NIS (Network Information System) from the UNIX LDAP were synced immediately on any UNIX LDAP group change. Keeping the access groups of a single directory resource in sync across the various ACL group management systems increased information security and prevented data leaks (due to looser access control in one of the systems).
The Broadcom account creation tool automates the creation of UNIX accounts, home directories and Windows accounts with an easy and quick to use web portal (for IT admin and helpdesk staff) and a set of automation scripts that act upon the data entered on the web portal. The progression of account creation (through all of its sub-phases) can be monitored on the website.
The web portal stores the entered new hire data in a central (MySQL) database. The automation agent scripts run partially on the company/IT "master" site and partially on 40 remote sites, which have their own UNIX NIS and (NetApp) filer infrastructure. The agent script installed on remote sites runs unmodified with a single "site id" parameter, fetching account provisioning data from the central database. The scripts are scheduled with the UNIX crontab scheduling system. During the evolution of the application the accounts were first created in NIS and later in LDAP; the change was extremely quick because of the well-thought-out modular design.
Account creation automation saved a large amount of manual effort and error-prone data entry that existed in the old fully manual process. The tool was still in use 11 years after its inception.
Cisco IP phone setup speeds up the setup and provisioning of a Cisco IP phone for an employee via a web portal. With only a few employee parameters (such as names, employee number, phone MAC address, device model, site and building) filled in on a form, the website sets up the phone. The only remaining task is delivering the IP phone device to the employee's desk and connecting it to the network. The web portal also provides visibility to site configurations, available number ranges, etc.
Cisco IP phone setup uses the documented Cisco CallManager SOAP/XML API, with roughly 5 calls made to the server to look up the employee, reserve a line (allocate a number), register the device and connect all this information together. The client side used HTTP Basic authentication for security.
The original process of setting up a phone via the Cisco CallManager web GUI required the telephony administrator to go through tedious page navigation, visiting ~15 pages to fill in the information to set up one phone. The cycle time of providing an IP phone was reduced from ~15 minutes to ~2 minutes per phone (an ~85% reduction).
LSF (Load Sharing Facility) is a product for hosting non-graphical engineering simulations on a large processing cluster, often consisting of hundreds or thousands of UNIX hosts. Monitoring jobs with the LSF command-line tools and all their associated switches takes a while to get used to. The MyLSF portal helps out by providing an easy browsing interface to LSF jobs.
The browsing web portal exposes queues, hosts and host groups in high-level and detailed views. The portal also allows browsing the UNIX processes underlying LSF jobs, to see how jobs are being processed on the cluster and whether memory or CPU are constraining them.
The portal saved engineering time by not requiring every engineer to master the details of the LSF command-line utilities.
MyCitrix is a portal for managing engineers' Citrix sessions worldwide on multiple (25) Citrix "farms" with multiple servers on each farm (75 in total). Users can conveniently list their worldwide sessions on all the farms and their hosts. The portal allows connecting back to previously established host sessions, disconnecting them, and logging off from a session completely. Forceful "reset" of a session is also supported. The portal overcomes a limitation of the Citrix-provided session management software, where the user cannot deterministically return to a previous session (sometimes a new session is created even when the user wished to return to previous work/applications in a Citrix session).
The portal is deployed as a single central application instance. Its business logic is hosted in a set of object-oriented modules that encapsulate the underlying Citrix commands used to control sessions. These modules were contributed to CPAN with the permission of the company CIO.
Engineers use MyCitrix as a quick-connect tool for their Citrix engineering work sessions. MyCitrix has stayed in the top 3 of intranet applications almost since its launch (Broadcom staff is 75% engineers).
An easy-to-use portal for employee-termination-related IT asset collection. IT assets are tracked in a Remedy system that provides a relatively clunky GUI. The web portal created for asset collection consists of an easy-to-use dynamic GUI that wraps the workflow between HR, the (terminated) employee's manager and IT helpdesk staff (or alternatively shipping staff for SOHO employees), who ultimately take care of collecting the physical assets.
The portal consists of 10 screens with AJAX-enhanced dynamic views (to eliminate redundant pop-ups and page transitions) and a modelled state-transition workflow engine that avoids the traditional rush-to-implement if/else-based workflow code that is hard to revisit and modify. The backend database used by the system is Remedy, accessed directly with the ARS Perl API.
Human resources, managers and IT were able to quickly and efficiently coordinate asset collection with minimal cycle time at employee termination. The portal sent notifications to the parties involved with a particular termination on all task completions.
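The modelled workflow idea — legal transitions kept in data rather than scattered if/else logic — can be sketched as follows; the states and actions are invented, not the portal's actual workflow:

```python
# Table-driven workflow: each (state, action) pair maps to a next state.
# States and actions below are illustrative placeholders.
TRANSITIONS = {
    ("hr_submitted", "manager_confirms"): "assets_listed",
    ("assets_listed", "it_collects"): "collected",
    ("assets_listed", "ship_kit"): "kit_shipped",
    ("kit_shipped", "kit_returned"): "collected",
}

def advance(state, action):
    """Return the next workflow state, or raise if the action is illegal."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")

state = advance("hr_submitted", "manager_confirms")
```

Adding or rerouting a workflow step becomes a one-line table change instead of surgery on nested conditionals.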
The Bluetooth group had developed their legacy testing framework in C++. Composing suite runs in various combinations directly in C++ was tedious, so a scripting language interface was desired.
An easy-to-use object-oriented (OO) API was designed in Perl to wrap the unnecessary details and complexities of test setup, making it easy and productive to create new test suites. Even though the original C++ API was written as a procedural interface, the Perl API followed an OO model, making the test code more readable and terse.
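The pattern of hiding a procedural binding behind an OO facade can be sketched briefly. The actual API was Perl over C++ bindings; this Python sketch uses stand-in functions for the procedural layer, and all names are illustrative:

```python
# Sketch of wrapping a procedural test-setup API in an OO facade.
# The underscore-prefixed functions stand in for the original procedural
# bindings; names and return values are hypothetical.

def _open_device(name):
    return {"name": name, "open": True}       # stand-in for device setup

def _run_case(dev, case):
    return f"{dev['name']}:{case}:PASS"       # stand-in for one test execution

class TestSuite:
    """OO facade hiding device setup details from test authors."""
    def __init__(self, device):
        self._dev = _open_device(device)
        self.results = []

    def run(self, *cases):
        """Run cases against the held device; chainable for terse suites."""
        for case in cases:
            self.results.append(_run_case(self._dev, case))
        return self
```

Test authors then write `TestSuite("bt0").run("pair", "scan")` instead of threading device handles through every procedural call.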
The Digital Video Technology (DVT) Synthesis group maintained their tasks in Remedy and needed a tool to search the tickets with very advanced search filters that the Remedy out-of-the-box interface could not provide.
A web tool with simple and advanced search user interfaces was created for the group. The search application ran in a mod_perl runtime, making it extremely responsive.
The ASR Tool was transferred to the Perl application development group for hosting in a Linux LAMP environment (previously Windows/IIS) and for further development.
The base code cleanup and refactoring included 1) converting GUI views to use templating (from traditional print output and heredocs), 2) moving to central connection pooling, and 3) reducing repetitive, redundant code. Additionally, a shipping admin GUI was added to manage shipping admins on 30 Broadcom sites.
Engineering and automation had over the years developed a large number of automations and applications that used many reusable Perl CPAN modules. Installing and maintaining the modules on a per-application basis proved to be an unnecessary burden for many automation and application developers. The company OSS (open source software) system also carried many Perl versions and variants (version number and OS platform combinations) with inconsistent sets of modules installed. None of the off-the-shelf Perl distributions fully covered the semiconductor-geared modules in their packaging. There was a need for a multi-platform (Windows, Linux, Solaris SPARC, Solaris x86) Perl distribution with the same consistent set of modules on every OS platform variant.
The ActiveState Perl distribution was taken as the basis for the "enhanced" in-house distribution (leveraging the extensive quality assurance ActiveState had put into the base distro). Approximately 500 well-known software, automation, IT and chip-design-geared modules were selected for the in-house "Business Automation" distro based on the insights of long-time Perl experts (in the team and partially outside it). An effective small-scale build system was created to automate the installation of CPAN modules (including the C bindings and Perl XS C code compilations). The distribution was compiled on/for each target platform using the same base process. It was stored in SVN (with *.so binaries for "platform native" modules) and deployed directly from there (as a curious technical detail, the ActiveState script-assisted "path relocation" process was applied when deploying the interpreter and modules to their final path location).
Business Automation Perl gained popularity with engineering groups and IT teams that wanted to depend on a reliable Perl environment with consistent modules and use it transparently on multiple platforms.
The Securesign web portal allows engineers to easily sign firmware and boot loaders by uploading the images to sign via a web browser. The portal provides access to an HSM (Hardware Security Module) that signs data with highly secure, hardware-stored keys. The application facilitates an easy workflow where the "owners" of the signing keys approve signing requests via email notifications. Approval is possible directly in the portal or by email (by typing "approve"/"decline" on the first line of the reply).
The application maintains key information (including owners and approvers, but not the private key), the workflow model, and workflow (request) instance data in a MySQL database. The complete request and its progression are (slightly redundantly) tracked in the Remedy database (notes are injected into the work log at every workflow transition event). For approvals, the Securesign workflow supports multi-person approvals with OR, parallel AND and sequential AND logic, which can be configured per the key owner's preference (i.e., a customizable flow per key). The portal communicates with the HSM over a secured protocol tunnel (SSH), exchanging the content and the returned signature via the tunnel.
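The three approval policies can be captured in a few lines. This is a hedged Python sketch of the logic only (the real implementation was Perl, and the policy names here are made up, not the portal's configuration keys):

```python
# Hypothetical model of per-key approval policies: OR (any one approver),
# parallel AND (all approvers, any order), sequential AND (all approvers,
# in the configured order). Policy names are illustrative.

def approved(policy, approvers, received):
    """approvers: configured list; received: approvals in arrival order."""
    if policy == "OR":
        return any(a in received for a in approvers)
    if policy == "AND_PARALLEL":
        return all(a in received for a in approvers)
    if policy == "AND_SEQUENTIAL":
        # Approvals must have arrived in the configured approver order.
        got = [a for a in received if a in approvers]
        return got == list(approvers)
    raise ValueError(f"unknown policy {policy!r}")
```

Keeping the policy as data per key is what makes the flow customizable per key owner.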
The Securesign portal provides an automated approval flow, audited processing and an encapsulated interface to the signing process. Parties doing the signing do not need to be involved in the intricate and error-prone details of the signing process, let alone jot notes or memorize key names, HSM box addresses and other low-level details. The application is still in use 11 years after its inception.
With an infrastructure change, several Broadcom LAMP Perl and PHP applications were to be transferred from their old hosting environment on Sun Solaris, Apache 1.3 and mod_perl 1.0 to a Linux, Apache 2.2 and mod_perl 2.0 environment. While the Solaris-to-Linux and Apache 1.3-to-2.2 parts were fairly trivial, the migration from mod_perl 1.0 to the (practically rewritten) mod_perl 2.0 required deep expertise in the changed Apache mod_perl interfaces (modules and APIs).
Applications were assessed for the changes needed and the extent of modifications required, then modified as needed to adapt to the new environment. Each application was tested in an isolated development environment.
All migrated applications were tested and ran stably in the new Linux/LAMP/mod_perl hosting environment.
With the Broadcom IT version control system changing from CVS to SVN, all LAMP apps needed to be migrated to SVN and re-deployed from SVN to their respective development, quality assurance and production deployment areas. Perl scripting was used to ensure consistent MIME types and line endings across all files of 20+ LAMP applications (among other details of the migration). The deployment documentation (for the initial checkout) also needed to be revised, converting CVS commands to their SVN equivalents.
Because of its pioneering role in the SVN migration (first to complete), the Perl team helped several other IT groups with their SVN migrations.
The ADP payroll provider offers customers two variants of their monthly payroll report: a no-cost single flat tabular text file (not usable directly as-is) and a normalized multi-table export/report for $7000/month. The free single file contains the same information as the normalized multi-file version, just flattened into one file. The single file has the added problem of varying column ordering and composition every month (the columns are, however, named in the file). Because of this, the file is not consumable/importable into a relational database with the usual off-the-shelf ETL tools. The payroll data was needed for company-internal financial reporting purposes.
An advanced Perl parser was created that, as the first processing phase, parses the file fully into intermediate in-memory data structures. The data structures corresponded to a 7-table relational schema with references holding the relations together. The second processing phase mapped the in-memory data into a relational database using a Perl object-relational mapper (ORM) toolkit. During development a MySQL database was used as the ETL transform destination, and in production an MSSQL database was used; only the (Perl DBI) database connection string (stored in a configuration file) needed to change for the production environment. The destination table SQL schema (compatible with both MySQL and MSSQL) was modeled as part of the project.
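The key trick in phase one is reading by column name rather than column position, which makes the parser immune to the monthly reordering. A minimal Python sketch (the real parser was Perl; the sample columns and tab delimiter are assumptions for illustration):

```python
# Sketch of order-independent parsing of a named-column flat file.
# Column names (emp_id, dept, gross) and the tab delimiter are illustrative.
import csv
import io

def parse_payroll(text, required):
    """Return rows as dicts keyed by column name, regardless of column order."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    rows = list(reader)
    missing = set(required) - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing expected columns: {sorted(missing)}")
    return rows
```

Two files with the same columns in different orders parse to identical dicts, so the downstream ORM mapping never sees the reordering.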
Finance was able to import the data for reporting from the free data feed instead of paying $7K/month for the same data in normalized format.
The Broadcom Digital Video Technology (DVT) group simulates hardware under development with software models written in C. These models contain security keys and algorithms known only to customers, and they must be run in a controlled manner where the executable containing the "secrets" is never seen by the user.
A secure wrapper was developed to allow secure C-model execution. The C-model binaries are stored in a protected area and are only launchable by a Perl+C hybrid, where the C wrapper takes care of impersonation (elevating privileges from the end user's "effective user" identity to a privileged account's identity with dedicated POSIX calls) and Perl looks up the encrypted executable from a registry and decrypts it to run the binary. The decrypted binary is deleted after running so that it is never left exposed. The system was still in use 10 years after its inception.
IPX Core Services is an SOA service for managing an IP design file repository and versioning. It provides a set of service methods to store, retrieve, compare (detect differences), copy (clone and inherit IP) and validate (perform chip design topology checks) design files, as well as inject uniquifying prefixes and watermarks into them. The end-user GUI client tier is a completely separate application, but the core services provide some developer- and admin-geared web GUIs to monitor the processing of asynchronous requests.
IPX Core Services was designed as a JSON-RPC service, where the JSON format well facilitates passing arbitrarily complex (deep, tree-like) data sets as service parameters. An asynchronous processing model was engineered on top of the basic synchronous JSON-RPC request/response model: processing is spawned in the background while the JSON-RPC response is sent back immediately, with completion signaled only at the end of processing. The service is modeled as an object class hierarchy of 15 associated classes that enable access from the SW (software) and HW (hardware) client tiers. The application reuses dozens of Perl/CPAN base functionality modules to implement its features efficiently via reuse (examples: DBI, StoredHash, Text::Template, File::*, ...).
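The asynchronous layering can be sketched compactly: acknowledge the JSON-RPC call at once with a request id, run the work in the background, and let the caller poll status later. This Python sketch is illustrative only (the real service was Perl, and the job registry and field names are assumptions):

```python
# Sketch of async processing layered on synchronous JSON-RPC:
# reply immediately with a request id, do the work in the background.
# The JOBS registry and response fields are illustrative assumptions.
import json
import threading
import uuid

JOBS = {}  # request id -> status ("running" / "done")

def handle_async(params, worker):
    """Spawn worker in the background, return the JSON-RPC response at once."""
    req_id = str(uuid.uuid4())
    JOBS[req_id] = "running"

    def run():
        worker(params)
        JOBS[req_id] = "done"

    thread = threading.Thread(target=run)
    thread.start()
    # Response goes back immediately; the thread keeps working.
    response = json.dumps({"jsonrpc": "2.0",
                           "result": {"request": req_id},
                           "id": 1})
    return response, thread
```

A monitoring GUI (as mentioned above) then only needs to read the job registry to show the progress of asynchronous requests.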
Increased engineering efficiency and legal compliance by providing a centralized repository containing all IP in use across company products.
The Bluetooth group had their own small command-line (CL) utility for BT address allocation. The allocation tasks were being transferred to marketing staff unfamiliar with command-line usage, so a graphical GUI version had to be built. Loosely based on the logic of the CL utility, the core (allocation) logic was refactored into a Perl object-oriented (OO) module, which was then shared by both the new CL utility and the easy-to-use GUI version (for marketing staff).
The Broadcom IT disk space reservation tool, hosted in an ad-hoc RedHat environment, needed to be transferred to the standard Broadcom LAMP hosting environment. Migration tasks included creating a new server config, reviewing and testing external HTML references (for images, stylesheets and JavaScript) and moving the application under version control. NFS mounts to the disk areas the application was using also had to be requested and configured.
With requirements for added security and single universal sign-on (SSO), IT chose the "IBM Access Manager" backend product for the SSO server. With this top-level change there was a need to integrate the product into the IT LAMP stack authentication flow.
A solution was architected in a way that required no changes to any of the applications: an Apache module was deployed on the LAMP servers to replace the old HTTP basic authentication mechanism.
The globally IT-mandated SSO authentication compliance was implemented transparently, with no changes to individual applications.
The NIC division CI/CD build environment suffered from infrastructure/build-OS consistency errors originating from: 1) inconsistent OS selection (e.g. a mix of LTS - Long Term Support - versions, and Desktop and Server editions), 2) inconsistent OS software packages on the hosts and inconsistent system configurations (e.g. under /etc/), and 3) inconsistent NFS filesystem mount configurations (a mix of /etc/fstab and NIS automounts). Many of these had crept in during "quick enablements" where a point problem on one host had been fixed without distributing the change to all hosts.
New "consistency-guaranteeing" host provisioning was designed around the Ansible infrastructure automation tool, which allows managing any number of hosts remotely over SSH and Python without installing any additional agents (or persistent daemon processes) on the remote hosts. A set of playbooks was written to install 1) a consistent set of OS packages (build tool and CI/CD script dependencies), 2) consistent system configuration files, 3) consistent user account, network and filesystem mount configs (using NIS and the Linux automounter), and 4) consistent IPMI remote management tools and monitoring agent installations. A consistent, locally accessible devops team user account was also set up for situations where network problems (such as stale mounts, file server outages or NIS failures) were encountered. "Idempotency" (the ability to run a change playbook on a host multiple times without corrupting the state or parseability of configurations - Ansible helps with this a lot) was designed into the playbooks to make it easy to apply evolutionary changes to old hosts as well.
Build errors originating from inconsistent hosts were brought to (near) zero, leaving hardware failures on the older, aging machines as the remaining problem. New hosts were provisioned using the Ansible playbooks, guaranteeing consistency, and because the playbooks are idempotent, evolutionary changes (e.g. new OS packages required by new builds) could be applied to previously provisioned hosts using the same playbooks.
The division CI/CD system was missing an automated way of signing firmware (or other binaries, e.g. bootloaders, or hash checksums of binaries). There was a legacy way of manually signing via an HTTP-based web portal, but the lack of SSL encryption and signing-related approvals made the portal potentially slow and insecure.
A new SSH (transport) based signing "pathway" was created to allow select authorized users (based on groups) to sign any payloads on a remote HSM (Hardware Security Module) system. A secure "intermediate" server was isolated from network filesystems and network-based account management systems, and the SSH authorized_keys mechanism ensured that only a limited number of accounts were authorized to use the signing pathway.
The new SSH-based, CI/CD-integrated signing system provided 100% unattended signing automation with an average signing response time of approximately 3 s. By Q4 2021 (an approximately 3-year span), ~228,000 signings had been done using the system.
There are multiple teams in the NIC/Connectivity division that could potentially benefit from the Ansible infrastructure automation tool.
A 1-hour beginner training was held in an Irvine meeting room, with part of the audience following the presentation on Webex.
Some of the training participants became long-standing Ansible practitioners.
The CI/CD build farm had a mix of Ubuntu 14.XX and 16.XX OS installations with 1) different versions of toolchains and 2) different flavors of OS distributions (Desktop vs. Server edition), among other (configuration-related) inconsistency problems. This caused problems where the binaries produced were not only different but would sometimes change in behavior (trigger bugs), especially at higher optimization levels (e.g. gcc -O3).
To remedy the consistency problem, all CI/CD build farm machines were installed with Ubuntu 18.04 LTS (Long Term Support) server edition. Packages on the machines were installed with Ansible infrastructure automation playbooks (discussed earlier). While a few initial installs were carried out using Dell iDRAC "Virtual Media" mounts, the dozens of later installs were done using pxelinux PXE boot, installing the operating system directly from network sources (using the Lineboot system to manage templated host-specific customizations such as the correct static IP address, NIS domain choice, timezone, etc.).
Equipped with a consistent LTS server edition (supported for 5 years), the machines were now uniform. The PXE-based install of the server distro with SSH enabled took 3.5 to 5 minutes (depending on the time of day) from the network bootloader menu to a completed install (while still running the full Ubuntu installer).
Yocto builds produced large nested *.tar.gz "tarballs" (around 10 GB in size), containing both the binary images and the source code of the build. "Nested" means the tarballs contained yet more tarballs inside them. For delivery to customers, the "inner" (nested) source code tarballs had to be removed. The old implementation, using Archive::Tar and iterating over about 15 (avg.) outer tarballs sequentially (in series), consumed a lot of time, forming a bottleneck in the build flow.
A new version of the utility was designed and implemented in which the outer tarballs are processed in parallel. Archive::Tar was kept as the tar processing engine. A high-level reusable parallel processing API was created in the process (DPUT::DataRun).
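The parallelization idea itself is simple: map the same per-tarball operation over all outer tarballs concurrently. A hedged Python sketch (not the actual DPUT::DataRun API; the per-tarball work is simulated here, where the real one strips nested source tarballs and repacks):

```python
# Sketch of parallel processing of independent outer tarballs.
# strip_sources is a placeholder for the real work: open the tarball,
# drop inner source *.tar.gz members, repack.
from concurrent.futures import ThreadPoolExecutor

def strip_sources(tarball):
    # placeholder for real tar processing of one outer tarball
    return (tarball, "stripped")

def process_all(tarballs, workers=8):
    """Process all outer tarballs concurrently; return name -> status."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(strip_sources, tarballs))
```

Because the outer tarballs are independent of each other, the speedup scales with the worker count up to the number of tarballs, matching the near-linear result reported below.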
On a high-CPU-core-count build machine (> 32 CPUs), the tar processing speedup corresponded almost linearly to the number of concurrent threads of execution (correlating to the number of outer tarballs to process). Tarball extractions that used to take over an hour now took ~5 minutes.
Global IT rolled out Nagios-based monitoring and wanted to include some of the NIC CI/CD machines in the list of monitored hosts.
An Ansible playbook was created to fully automate the installation and configuration of the SNMPv3 agent, as well as restarting the agent to make the configuration effective.
The team was hoping that IT would share the logs from SNMP monitoring, but that disappointingly never materialized.
Some changes had been waiting for review in Gerrit for a long time. There was a need to notify the owners about these old, stale changes.
A Python script was created to fetch information about old "pending review" changes in Gerrit using its REST/JSON API. Emails were composed so that alerts for many changes belonging to one person were merged into a single email, CC'd to the change owner's manager. The script was scheduled to run via the Electric Commander scheduling system.
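The two core steps - querying old open changes and merging them per owner - can be sketched as follows. The `/changes/?q=...` endpoint, `age:` operator and the `)]}'` JSON prefix are real Gerrit REST conventions; the host name and the 6-month threshold are assumptions:

```python
# Sketch of fetching stale Gerrit changes and grouping them per owner.
# The Gerrit host is a hypothetical placeholder.
import json
import urllib.request
from collections import defaultdict

GERRIT = "https://gerrit.example.com"  # assumption

def fetch_stale(age="6mon"):
    """Query open changes not touched for `age` via Gerrit's REST API."""
    url = f"{GERRIT}/changes/?q=status:open+age:{age}"
    raw = urllib.request.urlopen(url).read().decode()
    # Gerrit prefixes JSON responses with a )]}' guard line; strip it.
    return json.loads(raw.split("\n", 1)[1])

def group_by_owner(changes):
    """Merge many changes into one notification list per owner email."""
    per_owner = defaultdict(list)
    for change in changes:
        per_owner[change["owner"]["email"]].append(change["subject"])
    return dict(per_owner)

if __name__ == "__main__":
    for owner, subjects in group_by_owner(fetch_stale()).items():
        print(owner, subjects)  # one merged email per owner in the real script
```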
Owners of pending reviews (as well as their managers) were made aware of open changes that needed action.
Because of lacking IT support for filers at a remote lab-only site, it was necessary to set up a Linux-based file server for build artifacts. Artifacts need to be stored locally because the local QA testing system accesses the binaries from storage and benefits from data locality. The old filer hardware was starting to show its limits in CPU cores and storage, so new server hardware was needed to replace it.
Ubuntu 18.04 LTS server OS was installed on the new (higher-CPU, larger-storage) server with the new disks installed. An Ansible playbook was written to make the provisioning repeatable. Provisioning chores included installing filer-related configurations, configuring mounts for Samba and NFS, and tweaking filer-specific customizations such as the maximum number of file descriptors (which has to be large on a filer).
The new server provided more performant hosting for build artifacts, and the new OS had a longer support timespan. The Ansible filer setup helps reproduce the setup.
The IT filer migration from NetApp to Pure Storage filers for artifact storage raised a concern about build flow slow-down. Even though the IT-initiated transition was unavoidable, the devops team needed to test the speed to see how significantly it affected the holistic build cycle (including copying artifacts to the filers).
Tests were carried out between CI/CD build hosts and filers using various methods and protocols: rsync (fs-to-fs), cp and git (fs-based clone). The time utility was used to time the operations.
Filer speeds dropped approx. 25-35% (avg.), which was disappointing for new technology but not seen as fatal to the overall build flow (as there are other avenues to optimize performance).
JIRA tickets needed to be modified based on criteria (project, defect type, owner, ...) such that several hundred or even thousands of tickets would get modified; doing this manually would be unfeasible. JIRA provides a very good web API for selecting and filtering tickets, but no advanced granular updates can be done via it. This forced the use of the JIRA-supported extension language Groovy for the updates.
A hybrid model was used: a Python script queried JIRA (using the "JIRA" module) and output the results to a file in Groovy data structure format (suitable for the update). Because the JIRA frontend allows running only a single monolithic Groovy file, the (large) data structure was embedded into the update script to be iterated over for the Groovy (com.atlassian.jira.issue.* API) updates.
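The bridging step - rendering Python query results as a Groovy map literal for embedding - can be sketched in a few lines. The tuple layout and Groovy variable name here are illustrative assumptions, not the actual script's format:

```python
# Sketch of emitting JIRA query results as an embeddable Groovy map literal.
# The (key, field, value) tuple shape and "updates" variable are assumptions.

def to_groovy(issues):
    """issues: list of (issue_key, field, new_value) update tuples."""
    rows = ",\n".join(
        f'  "{key}": [field: "{field}", value: "{value}"]'
        for key, field, value in issues
    )
    return "def updates = [\n" + rows + "\n]"
```

The generated literal is then pasted into the monolithic Groovy update script, which iterates over `updates` and applies each change via the com.atlassian.jira.issue.* API.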
Using the JIRA HTTP query + Groovy update script approach saved a massive amount of manual, error-prone work, and the tickets got correctly updated. Variants of this import were later repeated (a few times) with high reuse of the old Python and Groovy code (not presented as separate items).
Among the CI/CD build hosts, the remote site (Richmond) had an exception in its use of automounter maps. Instead of the maps coming from shared NIS, there was a local /etc/ automount config that had to be maintained on each individual host.
The automounter info was migrated into NIS so that all CI/CD hosts get their mount info from NIS. The now-simplified host provisioning was run again to remove the custom configs.
The CI/CD host scripts were simplified and consistency was brought to CI/CD host mount map management.
The decision to migrate from Icinga2 to Zabbix monitoring would typically cause a lot of manual work - and some errors due to the manual approach - when installing agents and setting up their configs.
Ansible automation was created to 1) install the Zabbix monitoring agents and 2) set up their (host-specific) configurations for Zabbix "active client" monitoring.
The Ansible automation eliminated the error-prone manual machine-to-machine configuration process and enabled quick Zabbix provisioning on future bare metal, intranet VM or cloud VM machines.
Build VM guests are usually constrained in their host CPU (Hz and core count) and memory (GBs of RAM) allocation. It was decided to try how much builds would speed up if run in a bare metal environment.
A set of builds that were easy to migrate (without intricate OS-distribution-tied dependencies) from CentOS VMs to Ubuntu were tried on Ubuntu, and the build resource type configuration was updated accordingly.
The builds were observed to speed up 5x-7x (avg.), depending on the build. A big part of this should be attributed to the large number of CPU cores and the amount of RAM on the bare metal machines, as well as the efficiency of bare metal execution.
With the Coverity command-line (CL) clients being updated constantly, it was time to ensure that the server was new enough to accept defect submissions from the newer CL clients.
The upgrade, including importing a dump of data from the old server, was "practiced" on a loaner server from a neighboring division, using the same version as planned for production. After a successful test install, the same set of install steps was carried out on the old server. A maintenance window was declared and the migration was carried out during a Saturday night outage. The update was performed on the production server while still keeping the old version in place (and never allowing the old and new servers to run concurrently).
Upgrading to the newer version extended support to a range of newer analysis client versions.
DPDK (Data Plane Development Kit) relies on the Ninja/Meson build tools, and additionally DPDK required a newer version of Meson than Ubuntu 18.04 provided.
Ninja was installed from a distro package and the newer Meson was installed via pip, but the original Ubuntu distro Meson launch wrapper was found to be fully usable.
By installing the Ninja and Meson build tool dependencies, DPDK builds were enabled on the CI/CD system.
With increasing CI/CD build artifact storage needs and IT filers filling up, it was decided that the company's unlimited Box storage plan would be utilized for artifact storage. A neighboring division shared their experience and best practices with the rclone cloud storage mount tool (including aspects of replication concurrency, the need for multiple accounts, etc.).
Rclone "remote" configurations were set up with appropriate access tokens to allow artifacts to be copied to the Box cloud *after* the build, as copying during the build would have been very unpredictable and taken a long time.
Box cloud provided a storage expansion to the IT filers that were short of space.
Some builds on the CI/CD build farm had run into dependency requirements that had started to interfere with other builds, causing conflicts with header file versions, runtime library versions, etc. (with autotools scripts in particular detecting the wrong versions). It was known that isolating builds into Docker could solve this.
Docker builds were enabled by a (Perl) module named DPUT::DockerRunner that drives the docker run (in this case, the build) based on local configuration. This module was integrated into the CI/CD flow so that if a build configuration has a setting for "dockerimg" (a Docker image URL), the build is automatically run in a container of the image that the URL expresses.
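The configuration-driven decision - wrap the build in `docker run` only when "dockerimg" is set - can be sketched briefly. This is not the real DPUT::DockerRunner (which is Perl); the config keys other than "dockerimg", and the mount layout, are illustrative assumptions:

```python
# Sketch of a config-driven docker build wrapper: same build command,
# optionally wrapped in `docker run` when the config names an image.
# "buildcmd" and "srcdir" keys are assumptions for illustration.
import shlex

def build_command(cfg):
    """Return argv for the build, dockerized iff cfg has 'dockerimg'."""
    cmd = shlex.split(cfg["buildcmd"])
    img = cfg.get("dockerimg")
    if not img:
        return cmd  # plain host build
    # Mount the source tree into the container and run the build there.
    return ["docker", "run", "--rm",
            "-v", f"{cfg['srcdir']}:/src", "-w", "/src", img] + cmd
```

The CI/CD flow can then treat dockerized and plain builds identically; the wrapper keeps the branching in one place.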
By using Docker for CI/CD, any build and its dependencies could now be isolated into an image container with its own custom dependencies, keeping it from interfering with any of the (bare metal) host utilities, headers, libraries, etc.
The CI/CD testing environment has radically different and independent requirements from the CI/CD build environment (e.g. compilers, linkers and packaging tools are typically not used at all). Sometimes multiple versions of a test language interpreter (such as Python) available on a test host can lead to picking up the wrong interpreter version, causing errors during the test run.
To minimize tool version interference, avoid wrong versions of test tools and limit the tools used (to a consistent set) during the test phase, a Docker image with a limited set of test-geared tools (pytest, a pytest JSON generator, the googletest framework) was created.
The failure rate of test runs due to wrong tooling went down with tests run in the Docker-isolated environment.
The XUnit test runs (by pytest and googletest) in the CI/CD environment did not have any user-readable testing output in the final deliverables area where developers go to inspect the outcome of the build and related testing.
By using the right command-line options for XUnit output on the pytest and googletest runs and by agreeing on directory path conventions for the output (with the development teams), a visualized (HTML+ChartJS) output produced by the CI/CD build scripts was developed. Additionally, the passed and failed test counts were recorded in the (MySQL build metrics) database to allow creating trend charts across successive runs.
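Extracting the pass/fail counts from the XUnit output is the small parsing step feeding both the charts and the metrics database. A sketch, assuming the common JUnit-style XML schema that pytest (`--junitxml`) and googletest (`--gtest_output=xml`) both emit; the HTML/ChartJS and MySQL steps are omitted:

```python
# Sketch of counting passed/failed tests from JUnit/XUnit-style XML.
# Uses the common testsuite attributes: tests, failures, errors.
import xml.etree.ElementTree as ET

def xunit_counts(xml_text):
    """Return {'passed': n, 'failed': m} from one XUnit report."""
    root = ET.fromstring(xml_text)
    # Reports may wrap a <testsuite> inside a <testsuites> root element.
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    total = int(suite.get("tests", 0))
    bad = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return {"passed": total - bad, "failed": bad}
```

In the real flow these counts would be inserted as one row per run into the build metrics database, from which the trend chart is drawn.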
Having browser-viewable HTML output with graphs, easily navigable from the CI/CD system, helped developers quickly draw conclusions about the "goodness" of a build.
As "part 2" of the VMware to bare metal build migration, the builds with tighter ties to the CentOS distribution were to be tried in a CentOS-based Docker image.
A CentOS Docker image was created to match the CentOS VM guest image's package dependencies. It was noticed, however, that the total number of OS packages from the original VM could be reduced by about 50% without affecting the builds (meaning the other 50% was not used/needed).
The builds in bare metal Docker ran faster than in VMs and no longer needed commercial, licensed VM environments to run.
Most software developer time is spent implementing and testing software requirements in a pre-commit setting. These builds may happen on a developer workstation or a generic UNIX/Linux server with compilers and toolchains of a different kind or version than the actual CI/CD environment, which is the "golden reference". In the developer environment some tools may also be missing, or there may be an excess of tools compared to the CI/CD environment. This can cause a lot of troubleshooting to find out "what is different". If developer builds could be run in an environment identical to the final build environment, this unpredictability would go away.
A build runner tool ("wrapper") was created to run builds in Docker containers on the CI/CD build farm. The only requirement regarding source code location was that it had to reside on a shared network drive that the CI/CD system also had access to.
With developer builds enabled directly in the actual CI/CD environment, the unpredictability factors related to toolchains, toolchain versions, binaries produced, etc. disappear, and developers can rely on the build output corresponding to the CI/CD pre-merge validation builds.
With the CI/CD build machines being installed via PXE, most Linux distributions (e.g. Ubuntu, CentOS) require that the IP address given by DHCP during the PXE boot process be the same as the post-install static IP address. Failing this requirement, the network interfaces will be set up wrong (in Ubuntu Netplan, CentOS sysconfig/network-scripts) and have to be adjusted afterwards. To hand out the correct IP address, the DHCP server has to have the association between the NIC MAC address and the IP address to give out (a universal optional per-host setting on DHCP servers). In this case the Infoblox IPAM (DHCP and DNS address management) system was to be complemented with MAC-to-IP address info.
A system was created to fetch Infoblox MAC-to-IP info via the REST/JSON API that Infoblox provides. For missing or wrongly configured MAC addresses, the correct MAC address was updated via the same API.
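The fetch-then-update flow against the Infoblox WAPI can be sketched as follows. The `fixedaddress` object and the pattern of PUT-ing to a record's `_ref` follow Infoblox WAPI conventions, but the host, WAPI version, authentication and field details here are assumptions:

```python
# Hedged sketch of the Infoblox WAPI flow: look up the fixed-address
# record for an IP, then update its MAC. Host/version are placeholders.
import json
import urllib.request

WAPI = "https://infoblox.example.com/wapi/v2.10"  # assumption

def normalize_mac(mac):
    """Canonical colon-separated lowercase MAC form."""
    return mac.strip().lower().replace("-", ":")

def set_mac(ipv4, mac, opener=urllib.request.urlopen):
    """GET the fixedaddress record for ipv4; PUT the corrected MAC to its _ref."""
    recs = json.loads(opener(f"{WAPI}/fixedaddress?ipv4addr={ipv4}").read())
    req = urllib.request.Request(
        f"{WAPI}/{recs[0]['_ref']}", method="PUT",
        data=json.dumps({"mac": normalize_mac(mac)}).encode(),
        headers={"Content-Type": "application/json"})
    return json.loads(opener(req).read())
```

With the MAC-to-IP associations correct, DHCP hands the PXE-booting host the same address it will keep after installation.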
With Infoblox DHCP giving out correct addresses during the DHCP-based PXE boot process, the need to adjust IP address and hostname information after the install was eliminated. This saved time and increased the automation level of the PXE install system.
Legacy build virtual machines were originally set up to use local accounts and locally configured (/etc/fstab) network drive mounts. This caused manual work at (IT-mandated and other) account password changes, and interruptions in service after network drive file server changes (e.g. file server name or IP address changes) that were sometimes uncommunicated.
NIS and the Linux automounter (autofs) were taken into use to eliminate the problems with local accounts and network drive mount maps. The earlier-created NIS + autofs installation and configuration playbooks were used to set up both the software and the configuration.
The time-consuming manual password changes on a large host set became much easier with the change made in only one system (NIS). Build breakages due to filer changes (a filer disappearing due to an address or name change) were eliminated. SSH keys also became much easier to manage with NIS and network home directories.
With the IT filers filling up and Box internet cloud storage being slow, there was a need to find storage with flexible size adjustment and good performance. Artifactory was chosen for the purpose.
Artifactory allows its storage areas to be accessed with the WebDAV (HTTP/XML based) protocol, originally designed for document storage. Linux has a package named davfs2 that enables a (userspace) mount of any WebDAV server based on configuration. davfs2 was installed on the CI/CD central dispatcher machine (responsible for transferring build artifacts to the Artifactory WebDAV storage), and the configuration was set up so that the build service account copies the files to Artifactory (thus the build user owns the files in Artifactory). A Google document was written describing how to access artifacts in Artifactory on various operating systems (Windows, Linux, macOS) using their easiest/preferred method of accessing WebDAV. The ways to submit and retrieve artifacts via the Artifactory HTTP/REST APIs were also explored (with this method, files are sent and fetched as *.tar.gz tarballs).
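The REST-based submit path amounts to an HTTP PUT of the tarball to its target path on the server. A hedged sketch (the simple PUT-to-path deployment is an Artifactory REST convention; the server, repository name, path layout and token auth here are assumptions):

```python
# Hedged sketch of pushing a build tarball to Artifactory over HTTP.
# Server URL, repo name and path layout are hypothetical placeholders.
import urllib.request

ART = "https://artifactory.example.com/artifactory"  # assumption

def artifact_url(repo, build_id, fname):
    """Deterministic target path: repo/build_id/filename."""
    return f"{ART}/{repo}/{build_id}/{fname}"

def upload(repo, build_id, fname, data, token):
    """PUT the artifact bytes to its target path with bearer-token auth."""
    req = urllib.request.Request(
        artifact_url(repo, build_id, fname), data=data, method="PUT",
        headers={"Authorization": f"Bearer {token}"})
    return urllib.request.urlopen(req).status
```

Keeping the path scheme deterministic lets both the WebDAV mounts and the REST fetches locate the same artifact without a lookup step.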
The storage problems (performance, difficult expandability) with IT filers and Box were circumnvented by providing Artifactory WebDAV based storage as alternative.
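A davfs2 mount of this style is configured via an /etc/fstab entry plus a credentials file; the server URL, mount point and account name below are made-up placeholders:

```
# /etc/fstab -- userspace WebDAV mount of an Artifactory repository
https://artifactory.example.com/artifactory/build-artifacts  /mnt/artifactory  davfs  rw,user,noauto  0  0

# /etc/davfs2/secrets -- credentials for the build service account
# (format: mountpoint  username  password-or-api-key)
/mnt/artifactory  builduser  <api-key>
```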
With Artifactory servers and storage areas being reshuffled to give each group a dedicated server, the Docker images needed to be migrated from one server to another.
The migration was carried out by pushing all (~15) Docker build images to the new destination server. Strict cross-validation of the Docker SHA checksums was performed to verify that the correct image versions were pushed to the destination server. Because of the changed server and the "virtual" paths under it, the migration also involved updating the CI/CD system's "Docker catalog" to contain the new image URLs.
The migration enabled continued service of Docker images from the new internal Docker registry.
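A migration along these lines can be sketched in a few lines of Python driving the Docker CLI; the registry names are hypothetical, and the digest comparison stands in for the cross-validation step described above:

```python
import subprocess


def digests_match(a: str, b: str) -> bool:
    """Compare the sha256 parts of two repo digests (pure helper)."""
    return a.split("@")[-1] == b.split("@")[-1]


def repo_digest(image: str) -> str:
    """Ask the local Docker daemon for an image's repo digest."""
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{index .RepoDigests 0}}", image],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()


def migrate(image: str, src: str, dst: str) -> None:
    """Pull from the old registry, retag for the new one, push, cross-check."""
    subprocess.run(["docker", "pull", f"{src}/{image}"], check=True)
    subprocess.run(["docker", "tag", f"{src}/{image}", f"{dst}/{image}"], check=True)
    subprocess.run(["docker", "push", f"{dst}/{image}"], check=True)
    if not digests_match(repo_digest(f"{src}/{image}"),
                         repo_digest(f"{dst}/{image}")):
        raise RuntimeError(f"digest mismatch after migrating {image}")
```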
With product customers using Linux on CPU architectures other than x86 (on which typical server-grade hardware runs), there is a need to compile driver and other product SW binaries (such as NIC management utilities) for these target architectures, ARM and PPC, in the CI/CD environment.
Ubuntu (Debian) ships out-of-the-box cross-compiler packages that allow compiling e.g. ARM and PPC binaries on an Intel x86 OS. Two separate Docker images were created for the ARM and PPC cross-compiles, and the standard basic build toolchain was installed on these images.
Having Docker cross-compile images that run on x86 servers eliminates the need for dedicated ARM or PPC hosts for the compiles. An additional benefit is that x86 tends to be the standard and most performant commodity server hardware, so no custom HW purchases are needed.
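A cross-compile image of this kind can be described with a short Dockerfile; the package selection below is an illustrative sketch for an aarch64 target (a PPC image is analogous, using gcc-powerpc64le-linux-gnu):

```
FROM ubuntu:20.04
# Native build toolchain plus the aarch64 cross toolchain from stock Ubuntu repos
RUN apt-get update && apt-get install -y \
    build-essential make git \
    gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
# Builds inside the container then invoke aarch64-linux-gnu-gcc,
# e.g. via make CROSS_COMPILE=aarch64-linux-gnu-
```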
The CI/CD flow contains a lot of over-the-network execution with commands like git, rsync, ssh and curl, and many commands that are not inherently "network commands", such as cp (copy), become such when they operate on files on a network drive. With short transient everyday network outages (or even a failed DNS or NIS lookup when either service is under high load and responding slowly), a retry of the operation helps recover an otherwise failing CI/CD job (which is usually fully dependent on the over-the-network commands).
The run() shell execution wrapper was complemented with a netrun() counterpart that adds retry logic via the Perl DPUT::Retrier module.
The new automatically retrying shell execution wrapper allows CI/CD jobs to recover from short transient network outages without failing the current CI/CD (build, test, signing) job.
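The actual netrun() is Perl (built on DPUT::Retrier), but the retry idea can be sketched in Python; the attempt count and delay are illustrative defaults, not the real wrapper's settings:

```python
import subprocess
import time


def netrun(cmd, tries=3, delay=1.0):
    """Run a shell command, retrying on failure, to ride out short
    transient network problems (or slow DNS/NIS lookups).
    Returns 0 on success, or the last non-zero exit code."""
    rc = 1
    for attempt in range(1, tries + 1):
        rc = subprocess.run(cmd, shell=isinstance(cmd, str)).returncode
        if rc == 0:
            return 0
        if attempt < tries:
            time.sleep(delay)  # back off before retrying
    return rc
```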
Global IT initiated a requirement that division CI/CD build machines be monitored with the (commercial) Nimbus monitoring system. Nimbus is available for Linux operating systems as binary packages (*.deb, *.rpm).
An Ansible playbook was created to run the installation steps, configure Nimbus and start it up on the CI/CD machines. Webex meetings were held with IT to verify the monitoring function. One package version update was applied to fix flaws in the first version installed. It was also checked that Nimbus would not consume excessive CPU, memory or I/O resources from build jobs (its usage was found to be reasonable).
The Global IT Nimbus monitoring requirement was fulfilled by installing Nimbus on the build machines.
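The playbook essentially installs the vendor package and enables the service; the package path and service name below are assumptions for illustration, not the actual Nimbus artifact names:

```yaml
# Sketch of the install/start tasks (paths and names hypothetical)
- name: Install Nimbus monitoring agent from vendor .deb
  apt:
    deb: /opt/dist/nimbus-robot.deb

- name: Start and enable the Nimbus agent
  service:
    name: nimbus
    state: started
    enabled: yes
```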
CI/CD build areas are normally purged (completely removed) after a build completes and a selected subset of build products (typically runtime binaries) has been copied to the deliverables area. However, developers (or DevOps engineers) often need to analyze or troubleshoot a build by checking the whole build area, including source code, Makefiles (or other files driving the build), the Coverity intermediate directory (e.g. to upload it to Synopsys for analysis), etc.
A feature was developed where a build can optionally be archived under a distinct archive directory, which is easy to clean up with scheduled cleanup jobs (based on the age of the archived directory). The solution was tested on both Windows and Linux, and was utilized immediately after deployment to send a Coverity intermediate directory to Synopsys for analysis of a possible flaw in the Coverity product.
Archiving the build directory to a separate archive area makes the complete set of build sources and artifacts available for in-depth troubleshooting.
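The archiving feature amounts to moving the finished build tree under a timestamped archive path plus age-based cleanup; a minimal Python sketch (directory names hypothetical):

```python
import shutil
import time
from pathlib import Path


def archive_build(build_dir, archive_root):
    """Move a finished build tree under the archive area, timestamp-suffixed."""
    build_dir, archive_root = Path(build_dir), Path(archive_root)
    archive_root.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = archive_root / f"{build_dir.name}.{stamp}"
    shutil.move(str(build_dir), str(dest))
    return dest


def cleanup(archive_root, max_age_days=14):
    """Remove archived build dirs older than max_age_days (scheduled-job duty)."""
    cutoff = time.time() - max_age_days * 86400
    for d in Path(archive_root).iterdir():
        if d.is_dir() and d.stat().st_mtime < cutoff:
            shutil.rmtree(d)
```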
Management wanted to see the quantity of defects in each Coverity stream to allow them to figure out a strategy for fixing the defects.
A Coverity Streams based dynamic defect graph (bar chart) was created to show current defect counts (per build target). The chart is generated from live Coverity (REST/JSON) web service data.
An end-of-month snapshot is sent to management to be embedded into the monthly report slides.
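The chart data reduces to a per-stream count over defect rows fetched from the Coverity REST/JSON service; the row field name below is an assumption about the payload, not the exact Coverity schema:

```python
from collections import Counter


def defect_counts(rows):
    """Aggregate defect rows (one dict per defect, with an assumed
    'stream' field) into per-stream counts for a bar chart."""
    return Counter(row["stream"] for row in rows)
```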
With division strategy set to moving the build infrastructure from company data centers to the cloud, a set of changes was required to adapt to the GCP environment. Because company IT would not allow mounting network drives from file servers in company data centers, the data had to be replicated and maintained in Google cloud storage.
The NIS system was found to work as-is, but because of the mounting limitation a set of playbooks was created to 1) handle auto-mounting of (initially simple) GCP storage volumes and 2) create the limited number of necessary accounts (build user, Git user, DevOps team users) on the build infra. All the OS SW package (build dependency) provisioning and Linux (Ubuntu) OS system settings changes by the existing Ansible playbooks were found to work as-is (to be fully reusable), so no work was needed there.
With a relatively small amount of additional automation, the Google GCP environment could be brought to the same level as the previous "bare-metal" build machine automation.
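One common way to get fstab-style auto-mounting of a Cloud Storage bucket is gcsfuse; whether the playbooks used gcsfuse or another mechanism is not stated here, so the entry below (with a made-up bucket name and mount point) only illustrates the idea:

```
# /etc/fstab -- mount a GCS bucket for build data via gcsfuse
build-data-bucket  /mnt/builddata  gcsfuse  rw,_netdev,allow_other,uid=1001,gid=1001  0  0
```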
The CI/CD system's Coverity server installation had a sizeable backlog of defects for some SW projects/components, and there was an initiative to reduce the backlog by auto-assigning the defects to component owners (who further assign them to the individuals who actually fix them).
Coverity Connect has a feature called Component Maps / Components, in which components can be associated with the file and directory paths where defective files reside (the directory path of the file in which a defect is located is known to Coverity). The best source for the component classification was discovered to be an actively-in-use CI/CD system JSON configuration file, where the component directory paths were also configured. Owners were discovered by a combination of looking at miscellaneous Google spreadsheets containing information on the topic and interviewing people familiar with the SW development organization. A script utility was written to transform the data from the CI/CD JSON config into a Coverity-importable format that could be uploaded in the Coverity GUI. The tuples of component name, owner and directory paths were imported into Coverity, and Coverity was then able to auto-assign defects to their respective owners.
The Coverity built-in feature allowed assigning a large quantity of defects to the correct owners.
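The transform script essentially walks the CI/CD JSON config and emits (component, owner, path pattern) tuples; both the input keys and the output layout below are illustrative assumptions, not the exact Coverity import schema:

```python
def component_rows(cicd_cfg):
    """Turn a CI/CD config dict (assumed shape) into rows ready for
    Coverity component-map import."""
    rows = []
    for comp in cicd_cfg["components"]:
        for path in comp["paths"]:
            rows.append({
                "component": comp["name"],
                "owner": comp["owner"],
                # Coverity matches defect file paths against path patterns
                "pathPattern": path.rstrip("/") + "/.*",
            })
    return rows
```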
Using...:
Skills and experience to:
Integrate the Broadcom IP phone provisioning process with the Cisco Call Manager (CCM) system via SOAP web services. Provide a quick and easy web UI for setting up a phone.
Automate creation of new hire home directories by adding Netapp volume mount allocation records to Netapp config text files in Netapp config format.
Integrate with commercial UNIX Citrix service by reporting sessions and allowing easy reconnect to Citrix work sessions.
Parse and interpret a variable-column-width corporate payroll dump and transform the single tabular schema into a 5-table normalized (relational) format.
Read, transform and modify information in several Remedy entity schemas in Broadcom Remedy system via programmatic API (Including multiple Helpdesk schemas, IT Issue tracking system, IT Hardware assets registry, Employee schema).
Access HW based commercial encryption system remotely to sign firmware images. The system additionally integrates with Remedy helpdesk system to coordinate the workflow.
Complement application specific data with ETL processed Employee, Organization hierarchy (business units, cost centers), product information and company site location data.
Provide an asynchronous JSON-RPC Web service to store, retrieve and manipulate (IP watermarking and cell prefixing) IP file collections stored as IP projects.
Integrate a Company-wide web based search front-end with a PH/CSO directory / database containing Employee Phonebook information.
Integrate corporate phonebook with mobile devices by allowing Corporate phonebook to be queried by SMS messages from any SMS capable handset.
Integrate (miscellaneous) web applications to retrieve, view and modify information from a (Netscape) LDAP directory using programming APIs (Java JNDI, Perl, PHP). The widely varying data being accessed included employee and organization data (typical LDAP use cases), HW asset data, phone product configuration data, and access control authorization lists (ACLs).
Integrate LDAP stored ACLs to be transformed into Apache ACL format (allowing easy edits by web page owners via a web front-end).
Integrate Cell phone originated performance statistics ("counters") to be sent over HTTP in binary format and be stored into normalized relational DB.
Generate Cell phone component BOM from Mentor Vendor file and EDMS (electronic design management system) database dump as composition of needed information from the two master systems.
Handle two-way ETL replication between LDAP and relational database with a single solution basing replication instances on data mapping configurations.
Integrate Actual reporting framework to use LDAP for access control (authentication and authorization) using vendor provided integration api (C API).
Migrate and transfer conference room scheduling information from one vendor product (Oblix) to another (MeetingMaker), converting FS based text files to RDB form (vendor supported SP calls).
Integrate Cisco VMPS (VLAN Management Policy Server) ACLs to be managed with a web based tool that stores the ACLs in an RDB. The tool sends ACLs in the documented Cisco VMPS format directly to the device over the network (using TFTP).
Migrate data from a retired in-house leave tool to the commercial Kronos leave tracking system. Transform the in-house tool's single leave time range into the daily chunks supported by the new tool.
Migrate in-house developer diskspace tracker system from using NetApp filer infrastructure to using EMC filers and associated query (CLI) tools.
Take over the maintenance and enhancements of central IT web application monitoring system. Apply large refactoring for more maintainable (and modular) code structure and add support to monitor databases (directly using Perl DB agnostic DBI/DBD API/driver connectivity). Help customers create new monitoring tests for their applications.
Take over maintenance of ASR, applying requested enhancements and fixes. Change/port SQL queries affected by the database driver change from the Microsoft MSSQL driver to the FreeTDS open source driver (for Sybase and MSSQL). Set up Apache configurations for running the previously Microsoft IIS hosted server application in an Apache LAMP / mod_perl environment.
Take over maintenance and enhancement of the IT MyRemedy Search application. Transfer/port the application from a Solaris/LAMP/mod_perl 1.0 environment to a Linux/LAMP/mod_perl 2.0 hosting environment. Implement several enhancements and bug fixes, and overhaul the GUI for a more modern look (to match other LAMP applications).
Change mod_perl specific function calls and "mechanisms" (e.g. connection management and connection pooling) to mod_perl 2.0 API supported calls. Convert application specific Apache web-server configurations to be mod_perl 2.0 compliant.
Take over maintenance of the Broadcom Diskspace Request tool with an offshore team of developers. Change Webdisk to support EMC Isilon filers (recently standardized at Broadcom) in addition to legacy NetApp filers, accounting for the differences between the two (volume naming, etc.). Plan and design significant modularizations and refactorings of the application's original design and drive the team to implement them.
Take over maintenance of the IT generic reporting tool that allows reports to be generated by SQL (with optional stored procedures). Add features, fix problems, and fix and extend existing reports (SQL queries, stored procedure calls, or the stored procedures themselves).
Accept a handover of ALC data acquisition agent written in (Google) Dart language and convert it to company core-product codebase language (JS run in Node.js). Enhance granularity of data acquisition (e.g. wider variety of quantities) and integrate tool to core product.
Maintain and enhance the NIC division CI/CD build system. Enhancements on multiple fronts: allow running builds in Docker; log (fail/pass) results of XUnit test runs (from Python pytest, googletest (for C/C++), or any XUnit-outputting system) to a database and graph them; improve Coverity results accuracy (by analyzing all code of a build as opposed to only changed files); allow developers to trigger large-stack SW builds from a local workstation; etc.
Web services that I have created or tapped into as part of automation creation:
Open source reusable library module, framework and application contributions at various open source repositories and language module repos: