
Wednesday, December 23, 2009

Software Architecture Recovery as a Tool to Introduce Reuse in Companies

According to Ducasse and Pollet, software architecture reconstruction is a reverse engineering approach that aims at reconstructing viable architectural views of a software application [1]. The process of software architecture recovery (SAR) may be used in many ways, namely redocumentation, understanding of existing systems, evolution, conformance checking between the conceptual and concrete architectures, and other applications.

In general, SAR approaches can be classified as Bottom-Up, Top-Down, and Hybrid. The Bottom-Up approach starts with low-level knowledge, such as source code, and progressively tries to reach a higher-level understanding using reverse engineering. The Top-Down approach is the opposite: it begins with high-level concepts, such as architectural views and requirements, and tries to build the architecture through hypotheses that are confirmed against the source code. The Hybrid approach merges the two, abstracting the low-level knowledge and refining the high-level knowledge while ratifying correspondences between the conceptual and concrete architectures.
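As a concrete illustration of the Bottom-Up direction, the sketch below (a minimal example in Python; the function names are ours, not taken from any SAR tool) extracts a module dependency graph from source code. This is the kind of low-level fact base from which higher-level architectural views are progressively abstracted:

```python
import ast
from collections import defaultdict
from pathlib import Path

def extract_imports(source: str) -> set:
    """Return the top-level module names imported by one source file."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

def dependency_graph(root: Path) -> dict:
    """Map each module (file stem) to the modules it imports: the
    low-level starting point for bottom-up architecture recovery."""
    graph = defaultdict(set)
    for py_file in root.rglob("*.py"):
        graph[py_file.stem] |= extract_imports(py_file.read_text())
    return dict(graph)
```

Clustering strongly connected modules in such a graph is one way higher-level views get hypothesized during the recovery process.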

SAR approaches can take inputs of an architectural nature, like viewpoints and styles, and non-architectural inputs, like source code, human knowledge, etc. As output we can have, for example, visual software views, architectural conformance, and architecture analysis.

Some approaches that aim at investigating reuse and software product line (SPL) migration have been identified: ARES [2], ARMIN [3][4], MAP [5] and PuLSE [6]. Among their goals we can highlight the identification of commonalities, variabilities, and components that can be extracted from pre-existing systems and that, for example, might be turned into services.

Our current studies focus on identifying whether SAR can be used as a tool to support the introduction of reuse in organizations. Our main challenges are:
– Which approaches are more complete for introducing reuse?
– Which approach can we use?
– Are companies producing the artifacts these approaches need?
– Can these processes be agile?
– How much of these processes can be automated?
– How can we systematize this approach?

[1] S. Ducasse, D. Pollet, "Software Architecture Reconstruction: A Process-Oriented Taxonomy," IEEE Transactions on Software Engineering, Vol. 35, No. 4, 2009.

[2] W. Eixelsberger, M. Ogris, H. Gall, and B. Bellay, “Software Architecture Recovery of a Program Family,” Proc. Int’l Conf. Software Eng., pp. 508-511, 1998.

[3] R. Kazman, L. O’Brien, and C. Verhoef, “Architecture Reconstruction Guidelines,” technical report, third ed., Carnegie Mellon Univ., SEI, 2003.

[4] L. O’Brien, D. Smith, and G. Lewis, “Supporting Migration to Services Using Software Architecture Reconstruction,” Proc. Int’l Workshop Software Technology and Eng. Practice, pp. 81-91.

[5] C. Stoermer and L. O’Brien, “Map—Mining Architectures for Product Line Evaluations,” Proc. Working IEEE/IFIP Conf. Software Architecture, pp. 35-41, 2001.

[6] J. Knodel, D. Muthig, M. Naab, and M. Lindvall, “Static Evaluation of Software Architectures,” Proc. Conf. Software Maintenance and Reeng., pp. 279-294, 2006.

Regression Test Selection in SPL

According to Harrold [1], one factor contributing to the high cost of the maintenance phase is the time required to reanalyze and retest the software after it has been changed.


Several regression testing techniques have been proposed [4].

One important technique is Regression Test Selection (RTS), which consists of choosing a subset of tests from the old test suite and using this subset to test the modified program [2].
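Coverage-based selection is one common family of RTS techniques, and the core idea can be sketched in a few lines (a toy example; the test and function names are purely illustrative):

```python
def select_tests(coverage, changed):
    """Keep only the tests that exercise at least one changed function."""
    return {test for test, covered in coverage.items() if covered & changed}

# Per-test coverage, e.g. recorded by instrumenting the old version.
coverage = {
    "test_login":    {"authenticate", "hash_password"},
    "test_checkout": {"compute_total", "apply_discount"},
    "test_profile":  {"authenticate", "render_profile"},
}

# Only 'authenticate' changed, so the checkout test can be skipped.
selected = select_tests(coverage, {"authenticate"})
```

Real techniques differ mainly in the granularity of the covered entities (statements, methods, classes) and in how safely the change set is computed.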

Several criteria are used to evaluate test selection techniques [3]:

• Test suite reduction;
• Test execution time;
• Test selection time;
• Total time.

Finding and exploring efficient test selection criteria for SPL can help reduce the cost involved in the SPL testing context. The two key SPL concepts, variability and commonality, must be explored to support such criteria, together with the traceability between variabilities and test cases and between commonalities and test cases. Our goal is to support the definition of test selection criteria for SPL.

At the moment we are studying and testing a mix of techniques and criteria, focusing on their application in the SPL testing context. We will write more about this subject soon.

[1] M.J. Harrold and M.L. Soffa, "An Incremental Approach to Unit Testing During Maintenance," Proc. Conf. Software Maintenance, 1988.
[2] G. Rothermel and M.J. Harrold, "A Safe, Efficient Algorithm for Regression Test Selection," Proc. Conf. Software Maintenance (CSM-93), pp. 358-367, 1993.
[3] E. Engström, P. Runeson, and M. Skoglund, "A Systematic Review on Regression Test Selection Techniques," 2009.
[4] G. Duggal and B. Suri, "Understanding Regression Testing Techniques," 2008.

Wednesday, September 2, 2009

35th Euromicro Conference on Software Engineering and Advanced Applications


Last week, on 27-29 August, the 12th Euromicro Conference on Digital System Design (DSD) and the 35th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2009 were held.

Both conferences took place at the Cultural and Conference Center of the University of Patras. The event brought together researchers from various places of the world, all of them interested in discussing new ideas, work in progress, and concluded work. The RiSE group was represented by Yguaratã Cerqueira Cavalcanti in the SEAA 2009 sessions, where he presented three works from the group, as follows:

1 - Martins, A. C; Garcia, V. C.; Almeida, E. S.; Meira, S. R. L. Suggesting Software Components for Reuse in Search Engines Using Discovered Knowledge Techniques, 35th IEEE EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Service and Component Based Software Engineering (SCBSE) Track, Patras, Greece, 2009.

2 - Neiva, D. F. S; Almeida, E. S.; Meira, S. R. L. An Experimental Study on Requirements Engineering for Software Product Lines, 35th IEEE EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Service and Component Based Software Engineering (SCBSE) Track, Short Paper, Patras, Greece, 2009.

3 - Silva, F. R. C; Almeida, E. S.; Meira, S. R. L. A Component Testing Approach Supported by a CASE Tool, 35th IEEE EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), Service and Component Based Software Engineering (SCBSE) Track, Short Paper, Patras, Greece, 2009.

The paper "A Component Testing Approach Supported by a CASE Tool" was presented in the SCBSE: Component-based Systems Correctness and Test session. Alongside this work, several other articles were presented, showing really interesting approaches.

The paper "Suggesting Software Components for Reuse in Search Engines Using Discovered Knowledge Techniques" was presented in the SCBSE: Experiences and Applications session, and the paper "An Experimental Study on Requirements Engineering for Software Product Lines" was presented in the SPPI: Empirical Approaches session.

All the works presented were very interesting. People showed a lot of new ideas to solve well-known problems regarding SCBSE, and the importance of the empirical approaches session should be emphasized, since there is a lack of well-conducted empirical validation in most CS work.

Oh, we also had a very amazing gala dinner organized by the Euromicro committee, in front of a very beautiful beach. There we could taste really nice Greek food, and it was also possible to see some Greek dance and listen to Greek music. Really nice!!!

The next Euromicro will take place in Lille, France. I hope to see you there.

Thursday, May 28, 2009

31st International Conference on Software Engineering (ICSE) - Conference Report

Last week, I had the chance to participate in the 31st International Conference on Software Engineering (ICSE), in Vancouver, Canada. The conference had a great program composed of research papers, demonstrations, a track related to software engineering in practice (SEIP), another track about new ideas and emerging results (NIER), and several parallel events such as the 6th International Working Conference on Mining Software Repositories (MSR), the 17th IEEE International Conference on Program Comprehension (ICPC), and the International Conference on Software Process (ICSP).

Among these co-located events, it was very good to see the growth of MSR; it is incredible how much attention it is getting from the community. On Monday, I spent the morning hanging around the area and having some discussions with other participants. On Tuesday, Software Requirements and Design: A Tribute to Michael Jackson was held, a full-day workshop about his contributions to the field. The workshop was very well conducted by Pamela Zave and Bashar Nuseibeh. I believe that this kind of event is very important to celebrate outstanding researchers working in software engineering. I had the chance to participate in previous ICSEs, including the tribute to Barry Boehm. A book with his work, similar to those for Barry Boehm, David Parnas, and Vic Basili, will be released soon.

On Wednesday, the conference started. The first keynote was Steve McConnell with his talk "10 Most Powerful Ideas in Software Engineering". His presentation was interesting, especially when he pointed out some ideas with a gauge showing the state of each one. His final list was:

1. Software Development Work is Performed by Human Beings.
2. Incrementalism.
3. Iteration.
4. Cost to Fix A Defect Increases Over Time.
5. Important Kernel of Truth in the Waterfall Model.
6. Software Estimation Can be Improved Over Time.
7. The Most Powerful Form of Reuse is Full Reuse.
8. Risk Management Provides Critical Insights into Many Core Software Development Issues.
9. Different Kinds of Software Call for Different Kinds of Software Development.
10. SWEBOK.


After Steve’s talk, the research paper sessions started and I had to run and switch among different rooms and sessions. On this day, I decided to see the following papers:
  • Tesseract: Interactive Visual Exploration of Socio-Technical Relationships in Software Development; and
  • Succession: Measuring Transfer of Code and Developer Productivity.

Next, I participated in the NIER session, with interesting new ideas, and the SCORE competition by student teams. In this competition, Brazil was represented by a team from UFPE; congrats to the guys and to their coach, Prof. Jaelson Castro! At the end of the day, we had a small dinner with the conference members. It was good to talk a little bit more and meet other students and professors from Brazilian universities.

On Thursday, Carlo Ghezzi was the second keynote speaker, with the theme "Reflections on Forty-Plus Years of Software Engineering Research Observed Through ICSE: An Insider’s View". His presentation was very good, with a lot of data and charts, discussing what we have produced, how to measure it, lessons learned, and how to improve our current scenario. It was awesome!!

After that, I started to switch again among several presentations and I ended up with the following list:
  • Reasoning About Edits to Feature Models
  • How We Refactor, and How We Know It (Winner of ACM SIGSOFT Distinguished Papers Award)
  • The Secret Life of Bugs: Going Past the Errors and Omissions in Software Repositories
  • Discovering and Representing Systematic Code Changes.
Also on this day, we had discussions about Software Engineering for the Planet and Reflecting on Development Processes in the Video Game Industry. Finally, the paper N Degrees of Separation: Multi-Dimensional Separation of Concerns was presented and won the most influential paper award.

On Friday, Pamela Zave gave the last keynote and presented Software Engineering for the Next Internet. After Pamela’s talk, I participated in a session on Multicore Software Engineering, where I saw some challenges in the area and papers such as:

  • Does Distributed Development Affect Software Quality? An Empirical Case Study of Windows Vista (Winner of ACM SIGSOFT Distinguished Papers Award);
  • How to Avoid Drastic Software Process Change (using Stochastic Stability); and
  • Do Code Clones Matter?

That is my report about this ICSE.

Next year, see you in South Africa.

P.S.: I did not see the keynote presentations on the website. However, all the keynote speakers sent me their slides after my request. So, try it too.

Wednesday, March 18, 2009

IEEE Software - Top List

"From its start in 1984 through 2008, IEEE Software published more than 1,200 peer-reviewed articles".

To celebrate 25 years of publication, they prepared a very nice list of 35 highly recommended articles covering several issues of software development. I have read some of them, but I will surely go through the full list.

The list is here. Enjoy and spread this very useful knowledge in any project.

Wednesday, January 21, 2009

Revisiting Parnas: Use of the concept of transparency in the design of hierarchically structured systems

In 1975, Parnas published, with D.P. Siewiorek, "Use of the Concept of Transparency in the Design of Hierarchically Structured Systems". The publication talks about the difficulties in using an Outside In (aka Top Down) approach to design and develop software. The main point discussed in the piece is the cost of using abstraction in software construction.

For the authors, the use of abstractions is an excellent way to make big systems understandable as a whole, as higher-level abstractions hide the inner workings of a piece of software. The approach that starts from the outside in can have some difficulties, however: (1) it is difficult to obtain a good specification of the "outside"; (2) it is even harder to express it without implying internal design decisions; (3) deriving an implementation from such a specification is frequently not feasible; and (4) inner details of the implementation may already be fixed, such as the hardware or an operating system.

The term Transparency is then discussed. Consider a two-level system, say a lower hardware level and a higher control-software level. Transparency is the measure of how much of a lower level's capability is available at the higher level. Complete transparency means that whatever is feasible in the lower level tier is also feasible through the upper level tier. When a design decision restricts the possibilities of a lower level tier when used through an upper level tier, there is a loss of transparency. For instance, if our Data Access Object layer only permits data selection from the database, there is clearly a loss of transparency, as the ability to insert and delete data was suppressed.
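That DAO example can be sketched in a few lines (here in Python over sqlite3, with the class and table names invented for illustration): the lower level, the database connection, supports arbitrary SQL, while the upper level deliberately exposes only selection.

```python
import sqlite3

class ReadOnlyDao:
    """Upper level: exposes only part of the lower level's capability."""
    def __init__(self, connection):
        self._conn = connection  # lower level: full SQL power

    def find_by_id(self, table, row_id):
        cur = self._conn.execute(
            f"SELECT * FROM {table} WHERE id = ?", (row_id,))
        return cur.fetchone()
    # No insert()/delete(): the lower level can do both, so their
    # absence here is precisely a loss of transparency.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

dao = ReadOnlyDao(conn)
row = dao.find_by_id("users", 1)  # works: selection is transparent
```

Whether this restriction is a defect or a deliberate design goal is exactly the trade-off the paper discusses next.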

Complete transparency is not always a good thing: there is a trade-off between transparency and flexibility of a design. Increasing transparency between two levels can lead to great implementation difficulties and inefficiencies, and the designer should be aware of this and weigh it. As stated: "Loss of transparency is often one of the goals of a design".

Concluding on the difficulties with the outside-in approach, the authors affirm that the design usually comprises many inter-related objects. Moreover, there is limited experience with man-man symbiosis, so it is often impossible to specify the outside before construction without wanting to change it afterwards.

I would say that we still have limited experience with human-human symbiosis, which one could call the managerial issues in software development. There is also often a lack of engineering expertise, with software designers and developers forgetting key principles stated decades ago.

Tuesday, September 23, 2008

Six Steps to Develop a Good Survey

Often, we researchers and scientists have to understand, evaluate, and learn different methods, processes, techniques, technologies, and so on. We discussed previously the importance of empirical studies in the area. Another important, and probably the most used, research method is the survey.

If you take a look, we are often asked to participate in surveys in our lives, in different roles: as electors, consumers, service users, and so on. Doing research, many times we have to design a survey to understand and characterize some particular phenomenon. However, in some situations, we forget that there is a bunch of important material about it published in other sciences and, especially, in the software engineering area.

I am designing a survey to characterize the state of reuse measurement based on expert opinion, and the series of papers published by Shari Pfleeger and her colleagues has been extremely valuable. Thus, if you need to understand, design, construct, and evaluate a survey, I strongly recommend these papers [P1, P2, P3, P4, P5, P6]. They gather experience from the field over the last years and are extremely important for anyone interested in this activity.

Regarding the reuse area, I recommend, for example, two papers in this direction [P1, P2]. The second one presents the state of software reuse in Brazil, conducted by RiSE.

Friday, September 5, 2008

Computer Science and Software Engineering Ranking

Many times, I have met students and they ask me a similar question: What are the best universities and research centers in computer science and software engineering? Who are the main researchers in the software engineering area?

It is a complex question. However, some recent studies by important researchers can provide some insights. These studies were published in the main journals in the field and surely offer a good baseline.

Think of some universities, research labs, and researchers in computer science and software engineering around the world, write down your opinion, and check the results here [R1, R2, R3, R4]. However, it is important to highlight that this is not a final and universal ranking.

Wednesday, July 30, 2008

Empirical Studies: On software and sharks

Software has become part of our society and is found in products ranging from microwaves to space shuttles. This implies that a vast amount of software has been and is being developed. At the same time, organizations are continuously trying to improve their software processes in order to achieve their goals.

Nevertheless, once improvement proposals have been identified, it is necessary to determine which ones to introduce, if any. Moreover, it is often not possible just to change the existing software process without obtaining more information about the actual effect of the improvement proposal, i.e., it is necessary to evaluate the proposals before making any changes in order to reduce risks. In this context, empirical studies are crucial, since progress in any discipline depends on our ability to understand the basic units necessary to solve a problem. Additionally, experimentation provides a systematic, disciplined, quantifiable, and controlled way to evaluate new theories. It has been used in many fields, e.g., physics, medicine, and manufacturing; in the software engineering field, however, this idea started to be explored in the 70s with the work of Victor Basili from the University of Maryland. Currently, we can see conferences, books and other efforts in this direction.

At RiSE Labs, we are working hard in this direction. Every piece of research, before being introduced in an industrial scenario, should go through an empirical study showing its benefits and drawbacks. Sometimes, students and researchers argue a lot before seeing and understanding the benefits related to it. However, if we look for other examples in the real world, we can see that it is a standard procedure and is being taken seriously.

Last Sunday, Discovery Channel started Shark Week, a special program with a bunch of information about sharks. I did not have the opportunity to watch the first day, but on Monday I did. In the Day of the Shark program, researchers from several places around the world were performing important experiments involving shark attacks. I will explain some of them here, and maybe it can be useful:
  • What is the best approach in a shark attack? Stay together in a group (like in cartoons, everybody praying together) or separate, each one trying to save themselves? The experiments identified that it is better to stay together. The researchers used fake humans (dummies) to perform the experiments. As in a software engineering experiment, you have many threats to deal with (clothes for the dummies, lack of signals that humans emit and sharks' sensors can recognize, types of sharks, sharks' habits...).
  • Can sharks attack during the day and night in the same way? Yes, the experiment showed that sharks are not concerned about day or night.
  • Another important experiment was performed in order to design devices to avoid shark attacks. The researchers analyzed the use of electrical devices and a type of gas, and both were able to scare the sharks for a while. This way, you have some minutes to find a boat, a reef, etc.
  • In another experiment, they designed a material to be resistant to a shark bite. The material was very resistant; however, it was destroyed by the sharks, and they are planning a replication with some adjustments. Can you imagine yourself trying this solution before its experimentation?
The program was very impressive and interesting, and the researchers presented important findings, for example, regarding the surfers' dilemma: stay quiet or try to get out with the surfboard. Nevertheless, the experiment that impressed me the most involved two researchers surrounded by sharks [it can be disgusting to watch].

The goal was to try to understand the sharks' behavior. However, one of the researchers had part of his calf bitten by a shark. That was terrible. Still, after the accident, the same researcher replicated the experiment, trying to better control the variables (just him in the water, cameras away from him; the other guys were on a rock) and obtain more findings. In this replication, he did not have any problems.

As you can see, empirical studies are critical to improve science and our way of understanding part of the world and its elements. Luckily, in some situations in the software engineering area, we do not have to deal with humans in critical conditions to run experiments.

In addition, I agree with David Parnas when he said that we cannot run experiments for everything that we define (or we would just have to do that for the next years and forget our current activities); however, in some situations they are very important to present evidence about something.

Saturday, April 19, 2008

21st IEEE Conference on Software Engineering Education and Training (CSEE&T)

Last week, I participated in a good conference, the 21st IEEE Conference on Software Engineering Education and Training (CSEE&T). I say good because co-located with this conference there was the 3rd Academy for Software Engineering Educators & Trainers (ASEE&T). I did not know either of them before, and especially the second was incredible. Incredible because we had lectures by Barry Boehm, Victor Basili, and Dieter Rombach. That was very nice. The slides are available there; you can see them.

CSEE&T itself was also very good. It is a good conference to discuss software engineering education and training in general. There, we had discussions about games in software engineering, agile and formal methods, etc. The keynote speakers included Bertrand Meyer and Watts Humphrey.

I was there presenting part of our experience teaching software reuse with the paper A Case Study in Software Product Lines: An Educational Experience. In general, it was well received by the software engineering education community, with some questions. Next year, the conference will be in India.

Monday, October 22, 2007

XXI Brazilian Symposium on Software Engineering (SBES)

Last week I participated in the XXI Brazilian Symposium on Software Engineering (SBES) in João Pessoa, Paraíba. The event had several technical sessions, presentations, tutorials and panels in different software engineering areas. Furthermore, SBES 2007 had workshops and a tools session, in which the RiSE group was represented by two tools: ToolDAy, a tool that aids the domain analyst during the process, providing documentation, model views, consistency checking and report generation; and LIFT, which extracts knowledge from legacy systems' source code in order to aid the analyst in understanding the system's requirements.

Among other activities, SBES had a panel with RiSE group coordinator Silvio Meira (C.E.S.A.R and Federal University of Pernambuco), Don Batory (University of Texas), David Rosenblum (University College London), Claudia Werner (COPPE/Federal University of Rio de Janeiro) and Itana Gimenes (State University of Maringá) about Academic and Industrial Cooperation in Software Engineering. The panel raised questions about why there is so little cooperation between them and how it can be improved. In this panel, the LIFT tool was cited several times as a successful example of cooperation between academia (Federal University of Pernambuco) and industry (C.E.S.A.R and Pitang Software Factory).

Thursday, October 18, 2007

Extracting and Evolving Mobile Games Product Lines

We had a discussion about a paper published at SPLC 2005 with the title "Extracting and Evolving Mobile Games Product Lines". This is an interesting paper which describes a practical approach to software product lines involving Aspect-Oriented Programming (AOP). The author describes a scenario with three different mobile games and, using AOP, extracts aspects from the code. He based his idea on the fact that mobile applications usually have crosscutting concerns related to hardware restrictions and features that differ from one handset to another, such as screen size, pixel depth, API restrictions, etc. So, this scenario would be well managed by using AOP. The article also defines some rules for code refactoring to obtain the aspects and suggests a way to maintain the traceability between features and aspects. Other works have tried to implement the same idea, but the challenging scenario of mobile applications makes this one a pioneer.
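AspectJ-style weaving does not translate directly outside Java, but as a rough sketch of the idea the paper applies, a Python decorator can play the role of an aspect that weaves handset-specific screen scaling around the core game logic (all names, the device profile, and the reference resolution below are invented for illustration):

```python
HANDSET = {"screen_width": 128}   # device profile, fixed at build time
REFERENCE_WIDTH = 240             # resolution the core game was written for

def screen_aspect(func):
    """'Advice' that adapts coordinates to the handset's screen, keeping
    the crosscutting screen-size concern out of the core game logic."""
    def wrapper(x, y, *args, **kwargs):
        sx = int(x * HANDSET["screen_width"] / REFERENCE_WIDTH)
        sy = int(y * HANDSET["screen_width"] / REFERENCE_WIDTH)
        return func(sx, sy, *args, **kwargs)
    return wrapper

@screen_aspect
def draw_sprite(x, y, sprite="hero"):
    return (x, y, sprite)  # stand-in for the real rendering call
```

The point, as in the paper, is that `draw_sprite` itself stays identical across handsets; only the aspect (and the device profile it reads) varies per product.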


The biggest problem involving SPL and the mobile domain is possibly the chaos involved with the platform. We commonly see different manufacturers implementing the same platform in two different ways, or processing restrictions that make the platform implementation work differently from the specification. It seems that the mobile application domain is still in chaos because market share rivalry and the race for lower prices are more important than establishing a standard application platform amongst manufacturers.


My opinion is: in a short period of time, handsets' computing power will be enough to establish an SPL with the same sophistication as an SPL applied to desktop applications. Then, the major problems we now face with the mobile domain will vanish, and the remaining problems will be those common to any chosen domain.

Wednesday, September 19, 2007

No Evolution on SE?

Two weeks ago I participated in the EUROMICRO Conference on Software Engineering and Advanced Applications (SEAA), which was held on August 27-31 in Lübeck, Germany. I have participated in this conference since 2005 (the 2005 edition was held in Porto, Portugal, and the 2006 edition in Dubrovnik, Croatia).

This conference attracts a very interesting public from a set of software companies, such as Philips, Nokia, Sony Ericsson and HP, among others, and a set of recognized institutes like the Fraunhofer Institute, Finland Research and C.E.S.A.R., among others. In this way, interesting discussions and partnerships (with industry and academia) usually take place.

I presented two papers there: (1) a paper about a software component maturity model, in which I described the component quality model and the evaluation techniques proposed by our group in order to achieve a quality degree in software components; and (2) a paper about an experimental study on domain engineering, an interesting work accomplished by our group together with the university in order to evaluate a domain engineering process in a post-graduate course. Some researchers who watched those presentations believe that component certification is the future of software components and liked the work that we have been developing, because this area is sometimes vague. The researchers also liked the experimental study report and commented that this is an interesting area that could be improved in order to increase the number of proven and validated works (in academia or industry) in the software engineering area. The empirical software engineering area has received special attention in the last years from the software engineering community due to the lack of such works and the difficulty of evaluating software research.

A very interesting keynote speech was given by Ralf Reussner, who started his presentation with the question in the title of this post (No Evolution on SE?). He noted that since the NATO Conference (the first software engineering conference) we have seen the same questions/problems in software engineering conferences around the world: software project management problems, requirements changes, software project risks and mitigation, software reuse aspects, among others. Thus, the same problems continue to be presented and discussed to this day.
Additionally, an interesting point made by Ralf Reussner is why we don't have books like those of other areas, such as "Heart Transplantation in 21 Days" or "Nuclear Weapons for Dummies". So, in our area, the science/engineering is not regarded like other sciences/engineering disciplines. Perhaps this is the reason why we have been discussing the same problems and questions about software engineering since 1968. And the question remains... "No evolution on SE?"