Sep 12, 2014

Using Sikuli to Test a Windows Phone 8.1 Application

Test automation tools for regression testing of mobile applications are increasingly sought after. For some platforms, such as Windows Phone 8.1, it is very difficult to find good free tools, and automating them effectively usually requires investing in expensive testing tools. But if your company has no budget to buy test automation tools, it is possible to automate in a feasible way using a free tool called Sikuli.

Sikuli is a free, image-driven tool developed at MIT and scripted in the Python language. It allows you to automate regression tests; you can find tutorials and presentations on the Sikuli web site.

The requirements to use Sikuli to test Windows Phone are:

  1. Use a computer running Windows 8.1.
  2. Download and install Sikuli on your computer.
  3. Download and install Project My Screen for Windows Phone, available on the Microsoft page.
  4. Connect your Windows Phone 8.1 device and project its screen on your computer. An example of how to project the screen is here.
Steps to automate tests using Sikuli:

  1. Open Project My Screen and plug your Windows Phone 8.1 device into a USB port.
  2. The phone's home screen should appear on your computer.
  3. Open the Sikuli program on your computer.
  4. The Sikuli IDE opens, as shown in Figure 1.
  5. The left bar shows the commands available in Sikuli.
  6. Click the "click()" command.
  7. Now select an area of your phone's projected screen. For example, select the icon of your application (to open the application to be tested).
  8. Sikuli will write the command with the selected image: click(image).
  9. Now tap the icon of your mobile application to open it.
  10. Then go back to Sikuli and choose the click() command again.
  11. Select an area of the application, for example a button.
  12. Tap the selected button on your phone.
  13. In Sikuli, you can click the find() command.
  14. Select an image from your mobile application, for example a message.
  15. Return your phone to its initial state, for example the home screen.
  16. In Sikuli, click the Run button to play back your steps and watch them execute.
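Put together, the steps above produce a script along these lines (a sketch only: the image names are placeholders for the screenshots captured in the IDE, where each appears as a thumbnail, and the script runs inside Sikuli's Jython environment, not as standalone Python):

```python
click(app_icon)            # icon captured in step 7: opens the application
wait(app_screen, 10)       # wait up to 10 seconds for the app screen to appear
click(button_image)        # button captured in step 11
if exists(message_image):  # exists() returns None instead of raising FindFailed
    print "Test Passed"
else:
    print "Test Failed"
```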

Figure 1 - Sikuli IDE

The attached video shows how the recorded steps can be played back using Sikuli.

  • Project My Screen must be open at the same size and position on your computer for the steps to be reproduced exactly.
  • The phone's home screen and the location of icons and images must match those of the recorded run.
  • Sikuli recognizes the application through image and position.
  • You can use the Python language to improve your test script, for example:
if exists(image):  # exists() returns None instead of raising FindFailed
    print "Test Passed"
else:
    print "Test Failed"

May 15, 2014

Testing Android Applications using MonkeyRunner

Monkeyrunner is a tool from the Android SDK. It provides a Python API to run tests on Android devices or emulators. It is possible to write a Python program that installs an Android application and performs actions on it. 
The API has the following classes: 

  • MonkeyRunner: provides utility methods, including waitForConnection(), which connects to a device or emulator.
  • MonkeyDevice: represents a device or emulator; it provides methods for installing and uninstalling packages, starting an Activity, and sending keyboard or touch events to an application.
  • MonkeyImage: represents a captured screen image; it provides methods for comparing two MonkeyImage objects and writing an image to a file.
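As a minimal illustration of these classes, a monkeyrunner script might look like the sketch below (it runs only under the monkeyrunner interpreter, and the APK path and component name are placeholders):

```python
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# MonkeyRunner: connect to the first available device or emulator
device = MonkeyRunner.waitForConnection()

# MonkeyDevice: install a package and start an Activity
device.installPackage('c:\\monkey\\apks_to_test\\example.apk')
device.startActivity(component='com.example.app/.MainActivity')
MonkeyRunner.sleep(2)

# MonkeyImage: capture the screen and write it to a file
image = device.takeSnapshot()
image.writeToFile('c:\\monkey\\images\\home.png', 'png')
```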

MonkeyRunner Example

To use monkeyrunner, it is necessary to add the Android SDK tool directories to the PATH environment variable: 
...\sdk\build-tools\android-4.4;
...\sdk\tools

  • Create a folder called monkey on the C drive of the computer and put in it the scripts available here: 
  • Create the subfolders images, apks-script, log and apks_to_test
  • Open the Android emulator or connect your Android device via USB (USB debugging must be on)
  • Open a command prompt and type: 
  • monkeyrunner c:\monkey\

The monkeyrunner recorder interface will appear. 
With the mouse, perform actions on the emulator screen, like opening an application. 
The actions will appear on the right side of the recorder interface.

MonkeyRunner Recorder

In the recorder interface, save the script by clicking the Export Actions menu.
- Save the script as a file with the .mr extension (example) 
- Close the recorder interface and reopen the command prompt 
- Type: monkeyrunner c:\monkey\ c:\monkey\ 
- Press Enter and note that your recorded actions will be played back on the emulator or Android device.

If you prefer, it is possible to write scripts directly in Python and run them in monkeyrunner.

The Python code below tests the installation of an application, simulates the Back, Home and Menu button actions, and then generates a log file and saves a screen image.

import glob
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# get_package_activity_name() and log() are helpers from the scripts in c:\monkey\

class DroidTest:

    rootdir = 'c:\\monkey\\'
    apkDir = rootdir + 'apks_to_test\\'
    imageDir = rootdir + 'images\\'
    logDir = rootdir + 'log\\'

    def __init__(self, device, count):
        self.device = device
        self.count = count

    def run(self):

        for apk in glob.glob(self.apkDir + '*.apk'):
            print "Testing apk %s" % apk
            packagename, activity = get_package_activity_name(apk)
            componentname = packagename + "/." + activity

            apk_path = self.device.shell('pm path ' + packagename)
            if apk_path.startswith('package:'):
                print "App is already installed."
            else:
                print "App not installed, installing APK..."
                self.device.installPackage(apk)

            print componentname
            self.device.startActivity(component=componentname)
            MonkeyRunner.sleep(2)

            image = self.device.takeSnapshot()
            image.writeToFile(self.imageDir + 'screenshot_' + packagename + str(self.count) + '.png', 'png')

            # Simulate the device events: Home, Back and Menu buttons
            self.device.press('KEYCODE_HOME', MonkeyDevice.DOWN_AND_UP)
            self.device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP)
            self.device.press('KEYCODE_MENU', MonkeyDevice.DOWN_AND_UP)

            log(self.logDir + 'test' + str(self.count) + '.log', self.device)


if __name__ == "__main__":
    device = MonkeyRunner.waitForConnection()
    DroidTest(device, 1).run()

- Save this code as 
- Put an apk in the folder \apks_to_test
- Start the emulator or Android device and then type the following command at the prompt: monkeyrunner c:\monkey\

The Python program will run and execute the button tests.
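The script relies on a helper get_package_activity_name() that comes with the downloaded scripts. One possible implementation (an assumption, not the original helper) shells out to aapt, which is part of the SDK build-tools already on the PATH, and parses its badging output:

```python
import re
import subprocess

def parse_badging(text):
    # Extract the package and launchable activity from `aapt dump badging` output.
    package = re.search(r"package: name='([^']+)'", text).group(1)
    activity = re.search(r"launchable-activity: name='([^']+)'", text).group(1)
    # The test script builds componentname as package + "/." + activity,
    # so strip the package prefix and keep only the class name.
    if activity.startswith(package + '.'):
        activity = activity[len(package) + 1:]
    return package, activity

def get_package_activity_name(apk):
    out = subprocess.check_output(['aapt', 'dump', 'badging', apk])
    return parse_badging(out.decode('utf-8', 'replace'))
```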

May 8, 2014

Agile Practices to Help Mobile Testing

The growing use of mobile devices and software applications in everyday activities and communication has resulted in the popular use of mobile software on many kinds of device platforms. There is now a demand for agile and flexible testing procedures in software companies. In this post I will describe an experience of implementing some agile practices for mobile application testing.

Mobile Application Testing

Mobile application testing uses well-defined software test methods and tools to ensure quality in functions, behaviors, performance, and quality of service, as well as features such as mobility, usability, interoperability, connectivity, security, and privacy [1].
Mobile application testing has some peculiarities that include connectivity, convenience (quality of design), supported devices (diversity of devices and OS), touch screens, new programming languages, resource constraints and context awareness. There are challenges for the testing process, mainly in test selection and test execution, because of the rich contextual inputs and the lack of test tools [2]. 
According to the technical literature, the important types of testing for mobile applications are [3]:

· GUI Functional Testing: Functional testing ensures that the application is working as specified in the requirements. 

· Performance Testing: This testing is undertaken to check the performance and behavior of the application under certain conditions such as low battery, bad network coverage, low available memory, simultaneous access to application’s server by several users and other conditions.

· Memory and Energy Testing: memory leaks may preempt the (limited) memory resource, and active processes (of killed applications) may reduce the device (battery) autonomy.

· Interrupt Testing: An application while functioning may face several interruptions like incoming calls or network coverage outage and recovery. The different types of interruptions are: Incoming and Outgoing SMS and MMS, Incoming and Outgoing calls, Incoming Notifications, Battery Removal, Cable Insertion and Removal for data transfer, Network outage and recovery, Media Player on/off, Device Power cycle.

· Device multitude Testing: Testing an application on a multitude of mobile devices.

· Usability Testing: Usability testing verifies that the application has a friendly interface, achieves its goals and gets a favorable response from users. This is key to commercial success.

· Installation testing: This testing verifies that the installation process goes smoothly without the user having to face any difficulty. This testing process covers installation, updating and uninstalling of an application.

· Certification Testing: The mobile application needs to be tested against the guidelines set by different mobile platforms to be available in an application store, for example: Nokia Test Criteria, iOS testing criteria, Microsoft Test Criteria, etc.

These required types of testing impact the test activities during application development, especially in agile methodologies.

Experience: Project Description

Project X was a mobile application to help users navigate cities by giving access to scheduled departure/arrival times for public transportation.
The project followed the Scrum development framework, and the software platforms used were the J2ME programming language and the Eclipse IDE. The project had two deliverables for different mobile platforms. The project team had 4 developers, 1 Scrum Master, 1 Product Owner, 2 designers and 2 testers.
Initially, developers were responsible for coding new features, unit tests and continuous integration (CI), and testers took care of functional tests, non-functional tests, and certification tests for the mobile application.

 Testing Process

The testing process was initially designed following the traditional sequential phases: Test Planning, Test Specification, Test Execution and Test Report. The test tools used were TestLink to manage test cases, Jira to manage failures and J2ME Unit to automate unit tests. At the beginning of the project, the test team and design team were not allocated together with the development team despite belonging to the same company.
In early iterations of the project, some problems were identified, such as slow response to changes in the project requirements, poor and insufficient unit testing, little time to execute functional and non-functional testing, and an unstable application running on the target device.
To solve these issues and accommodate agile development iterations with all the types of testing required, the test process was revised and some agile practices were implemented, as follows:
·     Team Co-location: Testers, designers and developers allocated in the same room to join the team and encourage cooperation.
·     Pair programming: Developers and Testers together in pairs to implement unit testing to improve test coverage.
·    Pair Testing: Testers together to implement and execute exploratory tests and non-functional tests (Performance and Security).
·    Prioritization of regression testing: regression testing was executed following the priority of the most critical features to be delivered (covering certification testing).
·     Designers involved in interface tests to identify unconformities of screen flow and screen design.

Fig. 1. Testing Process Activities

Figure 1 shows the testing process activities designed for mobile applications. 
Testers with programming skills paired with developers to improve the unit testing.
Test Execution included functional tests, regression tests, performance tests, security tests, stability tests, certification tests and interface (GUI) tests. If failures were found, they were reported to the developers; once the failures were fixed, the test team retested to validate the fixes. When all critical failures were fixed, the test report was generated and sent to the project team.


With these agile practices, the project team found an effective way to work. Co-location improved communication between testers, designers and developers, and also promoted fast feedback when changes were required. Pair programming solved the poor unit testing, since testers helped to improve the quality of scripts and test coverage. Running regression tests for prioritized features, involving designers in GUI tests and the pair testing approach fostered cooperation in the testing team, allowing them to run all the types of tests required for this mobile application on time. 

The table below compares the results of a past application, Project 1, which used the traditional test process, with Project X, which used the agile practices. Both projects had the same number of testers, developers and designers, and both had 6 months to develop, test and deliver the application. The test team in Project X could execute more of the important types of tests for mobile applications, and the project was delivered stable and without remaining failures, which was a very good result.

Table columns: Number of Features | Failures Found and Closed | Failures Remaining After Delivery | Types of Testing Applied

Project 1 (Traditional Test Process), types of testing applied:
   Unit Testing
   Functional Testing
   Exploratory Testing
   Regression Testing
   Performance Testing

Project X (Agile practices applied to testing process), types of testing applied:
   Unit Testing
   Functional Testing
   Regression Testing
   Performance Testing
   Security Testing
   GUI Testing
   Stability Testing
   Certification Testing
   Field Testing


   1.   Gao, J.; Bai, Xiaoying; Tsai, Wei-Tek; Uehara, T., "Mobile Application Testing: A Tutorial," Computer, vol. 47, no. 2, pp. 46-55, Feb. 2014. doi: 10.1109/MC.2013.445
   2.   Kirubakaran, B.; Karthikeyani, V., "Mobile application testing — Challenges and solution approach through automation," Pattern Recognition, Informatics and Mobile Engineering (PRIME), 2013 International Conference on, pp. 79-84, 21-22 Feb. 2013. doi: 10.1109/ICPRIME.2013.6496451
   3.   Kumar, M.; Chauhan, M., "Best Practices in Mobile Testing," White Paper, Infosys, 2013.

Aug 20, 2013

Starting Performance Testing with Jmeter

This post is about performance testing a web server using JMeter.
First, before starting load or performance tests, some planning is necessary, covering the following activities:

  • Understanding web application architecture, network environment and business rules and needs.
  • Choosing tools to automate load/performance tests
  • Selecting user scenarios, identifying the critical points.
  • Paying attention to the test environment: there must be a network dedicated to testing and a good machine to run the test automation tools, plus tools to monitor the network and the server CPU. 
  • Recording selected scenarios.
  • Choosing adequate reports to analyse the data from execution.
  • Analysing and generating reports.

Using Jmeter

JMeter is an open-source tool used for performance testing, load testing and stress testing. It can simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types.

JMeter can be downloaded from the Apache JMeter web site.

The JMeter installation is very simple. It requires a JRE/JDK correctly installed and the JAVA_HOME environment variable set. Just unzip the zip/tar file into the directory where you want JMeter.
In the JMeter/bin directory you can run the jmeter.bat (for Windows) or jmeter (for Unix) file.

JMeter Interface
The first element is the Test Plan. Select it, right-click and add the Thread Group component needed for the test.
Test Plan -> Add -> Threads (Users) -> Thread Group.

It is useful to add the element called Recording Controller. It helps organize the test scenarios according to the functionality to be tested.

Right Click on Thread Group -> Add -> Logic Controller ->  Recording Controller

Then, it's good to add listeners to see the data generated by tests.

Thread Group right click -> Listener -> Aggregate Graph
Thread Group right click -> Listener -> Summary Report

If you want to collect isolated results, add listener just for Recording Controller.

Recording Controller right click -> Listener -> Aggregate Graph

Next, the HTTP Proxy Server element must be added; it allows JMeter to record user requests and turn them into Sampler objects.
  • Right click on WorkBench -> Add -> Non-Test Elements -> HTTP Proxy Server.
  • Open the browser and configure it to use the proxy port set in the JMeter proxy server (default: localhost:8080).
  • In JMeter -> HTTP Proxy Server, select your Thread Group and Recording Controller in Target Controller.
  • In JMeter -> HTTP Proxy Server, click the Start button.
  • Execute the actions in the browser; they will be recorded by JMeter.
  • Afterwards, access the web page and record the user scenarios.
When you finish, just go back to JMeter and stop the HTTP Proxy Server. It's good to remove the requests for images, CSS files and scripts before running the tests.

Next, select the Thread Group and, under Thread Properties, set:
  • The number of threads: used to simulate concurrent connections to your server application.
  • The ramp-up period: how long it takes to "ramp up" to the full number of threads chosen.
  • The number of times to execute the test (loop count).
Example: set 10 threads and a ramp-up period of 100 seconds. It will then take 100 seconds to get all 10 threads up and running.
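The arithmetic behind the ramp-up period can be sketched in a few lines of Python (illustrative only; JMeter does this internally):

```python
# JMeter starts one new thread every ramp_up / threads seconds.
threads = 10
ramp_up = 100.0  # seconds

delay_between_starts = ramp_up / threads
start_times = [i * delay_between_starts for i in range(threads)]

print(delay_between_starts)  # 10.0 -> a new thread every 10 seconds
print(start_times[-1])       # 90.0 -> the last thread starts at t = 90 s
```

So all 10 threads are running only near the end of the 100-second window, which spreads the load instead of opening every connection at once.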

The next step is to select the Play button in Jmeter toolbar.

Note: if you set a large number of threads, for example 1000, JMeter may freeze and show an out-of-memory error on the command line. Check the RAM of the computer and edit the following line of jmeter.bat:
set HEAP=-Xms2048m -Xmx2048m
For good performance, it is recommended to run a large number of threads in JMeter on a computer with more than 4 GB of RAM.

When the execution stops, observe the Summary Report. It shows the measured values.

  • Label: the HTTP request recorded.
  • Samples: number of HTTP requests run across the threads. 
  • Average: average response time for the HTTP request.
  • Min: the minimum response time taken by the HTTP request. 
  • Max: the maximum response time taken by the HTTP request. 
  • Std. Deviation: how much the response times deviate from the average. 
  • Error %: percentage of failed requests during the run. 
  • Throughput: number of requests per unit of time sent to the server.
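These columns can be reproduced from raw sample data, which helps when interpreting the report (a sketch with made-up numbers; JMeter's exact standard-deviation formula may differ slightly):

```python
import statistics

response_times_ms = [120, 150, 110, 300, 140]  # one entry per sample
errors = [False, False, True, False, False]    # whether each sample failed
test_duration_s = 2.0                          # wall-clock duration of the run

samples = len(response_times_ms)
average = statistics.mean(response_times_ms)    # Average column
minimum = min(response_times_ms)                # Min column
maximum = max(response_times_ms)                # Max column
std_dev = statistics.pstdev(response_times_ms)  # Std. Deviation column
error_pct = 100.0 * sum(errors) / samples       # Error % column
throughput = samples / test_duration_s          # Throughput (requests/second)
```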

In the Aggregate Graph, the results are plotted and it is possible to identify bottlenecks in the HTTP requests.
Aggregate Graph

Mar 11, 2013

Experience on the Application of Distributed Testing in an Agile Software Development Environment

Software engineering is by nature a highly collaborative activity. However, this collaboration is more difficult when the teams are geographically separated, as several factors, such as work-time, cultural differences, communication, technical capability, among others, may impact on its success. Moreover, each activity in the software development process has specific needs in a distributed software development (DSD) environment.
In this post, I will report an industrial experience of a testing team separated geographically in the context of a software project that followed an agile method. This experience was made possible through a collaboration between industry (Nokia Institute) and the Federal University of Amazonas, and it generated a paper presented at ICGSE 2012. Authors: Eliane Collins, Gisele Macedo, Nayane Maia and Dr. Arilo Dias-Neto.


A. Software Project Characteristics

This project was conducted in the context of a research institute in Manaus/Amazonas/Brazil, aiming at developing a web application to manage advertising campaigns, called Project X. The project was developed following the Scrum agile methodology and consisted of 30 stories (Product Backlog) divided into 9 sprints (iterations) varying from 2 to 3 weeks. It had a development team composed of 1 Scrum Master and 3 full-time developers. The testing team, the focus of this paper, consisted of 6 professionals: 2 full-time testers located in the same physical environment as the developers and 4 part-time testers who worked at another site (geographically separated) as a result of a cooperation between the research institute and the Federal University of Amazonas (UFAM).
Project X was developed on a web platform (PHP + MySQL database) with the Eclipse IDE. The system was composed of 8 screens (4 forms to register campaigns and users and 4 screens to search and generate reports).

B. Testing Process

The Project X’s software testing process followed the Scrum methodology, where the testing team was integrated to the project team, participating in Scrum ceremonies (sprint review, daily, retrospective and planning meetings). Their responsibilities were to plan test cases through the stories described in the Product Backlog, specify the acceptance criteria, and use test automation tools to speed up the execution activities of each sprint.
Due to the geographic distribution of the testing team, the testing process was designed considering the characteristics of a DSD (distributed software development) environment. We needed to provide a testing structure allowing access to the project's information by all team members. Thus, to meet this demand, the testing team used a dedicated server, accessed remotely, with the following tools:
TestLink: used to manage test plans, write test cases, and report test execution. The selection of tests to compose a test suite was done manually; TestLink acted as an editor and organizer of test cases, storing all information. It facilitated the creation of test plans and report documentation, and the control of test execution versions;
Mantis Bug Tracker: through this tool, which was already used in the institute, the testers registered the defects found, sent them to the developers and controlled the lifecycle of each defect. 
FireScrum: a web Scrum taskboard tool, used to detail the tasks defined for the project sprint. It was used by both testing teams (remote and local) to record and update task progress. The test coordinator then used this data in the project's daily meeting.
Subversion: it was used to share and manage information among the test team members. 
Moreover, in order to ensure the operation of this infrastructure and facilitate communication among the teams, a Test Leader located at the institute was responsible for defining the testing tasks, planning the sprint activities and reporting the progress of these activities at the daily meetings. Only the Test Leader communicated with the development team, avoiding a larger communication network in the testing team. The remote testing team was allocated to the tasks of designing, automating (creating scripts) and executing test cases for the stories developed in each sprint.
The tasks of the test process during the sprint were incremental and iterative, following the Scrum process. They are represented in Figure 1. The dark rectangles represent the Test Leader's activities and the lighter rectangles represent the Remote Testing team's activities.

Figure 1. Overview of  Testing Process

The task Incremental Test Execution includes the execution of exploratory tests and the automation of regression tests. For Project X, the tools chosen for functional test automation were Selenium RC and Selenium IDE. Selenium uses a record-and-play approach and tests web applications in the Mozilla Firefox browser. Selenium IDE can record the user's actions in the browser, creating test scripts in several programming languages to be executed later. The Selenium Java API used in the project allows running the test scripts in other browsers, such as versions of Internet Explorer. The automated test suite was updated and executed in every sprint. The TestLink and Selenium tools were integrated, so when test cases are executed in Selenium, their results are recorded automatically in TestLink.
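For illustration, a Selenium RC test script in Python looked roughly like the sketch below (the page URL and locators are hypothetical, and it requires the Selenium RC server running on port 4444):

```python
from selenium import selenium

# Connect to the Selenium RC server and launch Firefox
sel = selenium("localhost", 4444, "*firefox", "http://projectx.example/")
sel.start()

sel.open("/campaigns")                  # open the campaign search screen
sel.type("id=name", "Summer campaign")  # fill in the search form
sel.click("id=search")
sel.wait_for_page_to_load("30000")

# A simple acceptance check on the result page
if sel.is_text_present("Summer campaign"):
    print "Test Passed"

sel.stop()
```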

C. Communication Process

According to agile practices, communication is an essential factor for the success of the agile project. Project members who have good communication process can cooperate more and mitigate risks of changes during the project. It is an essential factor when some teams of the project are geographically distributed.
To work with distributed teams, it is important to use tools that facilitate communication. In this project we used several tools: e-mail, a free Scrum taskboard tool (FireScrum), and a chat and video-conference tool (Skype) to enable online communication among team members, resolving doubts as soon as possible. E-mail was used only for offline communication (invitations to meetings and project documents). 
Daily, all team members needed to access the online FireScrum tool, where the tasks were created and allocated to the testing team members by the Test Leader. The Remote Testing team was responsible for updating the FireScrum tasks every day, reporting the progress and impediments.
Figure 2 represents the communication flow among the project teams. According to this flow, communication between the Development Team and the Test Leader was direct (face-to-face), while communication with the Remote Testing team occurred through the test server tools.

Figure 2. Communication Network.

D. Tasks Allocation

Using the Scrum methodology, tasks are defined and estimated in the Sprint Planning Meeting. In this ceremony, each professional involved must participate in specifying the tasks necessary to accomplish the stories. All members were invited to this event, including the remote team. Once the tasks were defined and clearly understood, the Test Leader entered them into the FireScrum tool and allocated them to the testing team members. During the sprint, the Remote Testing team could add and assign new tasks when appropriate. With all communication tools available, any possible impediment from the remote testing team could be quickly detected and easily solved, avoiding delays. Each team member was responsible for his/her tasks and had to communicate any difficulty as soon as possible. A task's status could be: to do, in progress (ongoing task) or done (completed).
Every day, before the project's daily meeting at the company, the Test Leader checked the progress of tasks to inform the development team. Among the main tasks assigned to the Remote Testing team were: specification of new test cases, updating of test cases from previous stories, creation and automation of test scripts, execution of test cases and test scripts for the stories in the sprint, defect registration, execution of automated regression test scripts and validation of solved defects.
The main Test Leader's tasks can be summarized as: reviewing created test cases, monitoring tasks in FireScrum, facilitating test tasks, sending project information and reporting test execution.

E. Planning and Retrospective Meetings

Following Scrum, there are two other important meetings: planning and retrospective. Planning is the meeting where the whole team discusses how to implement the stories according to the prioritized backlog. In Project X, the whole testing team participated in this meeting (including the remote team), choosing the stories to be developed and estimating the complexity of the selected stories. The retrospective meeting is considered a very important ceremony because it is where the team discusses what went right and what should be improved in the next sprint. All team members, including the remote testing team, participated in this event, sharing experiences and understanding the difficulties to be addressed in the next sprint. With the participation of the remote testing team in both Scrum ceremonies, the project team stays united and everybody feels part of the team.


In this experience of conducting distributed testing in an agile software project, we could observe that it can work very well. However, some issues need to be managed to avoid the risks introduced by the combination of these software engineering practices (DSD + agile practices). We identified some challenges and key lessons learned for minimizing the impact of the geographical distance between the testing teams:

A. Communication and coordination are essential factors for the success of distributed testing

All the main research works in the DSD field indicate communication and coordination as important aspects for success in a distributed software project, and we could confirm these claims. Some practices were essential to reach this success:
• Allocating one person (the test leader) as a link between the local and remote testing teams is very important to avoid a large communication network and, consequently, noise. All information and decisions always passed through this professional, who was responsible for distributing information, solving impediments and communication problems, and making the testing tasks easier.
• A communication protocol should be formalized, regulating how the teams keep in contact with each other. That includes the structure of e-mails, communication tools, meetings, timetables, and so on. This is essential to avoid losing data, effort, and quality in the software project. For instance, the definition of a standard for test case specification and bug reporting allows a tester to complete a task started by another tester or validate a bug reported by another professional.
• An online chat tool should be used, and the key persons in the project (Scrum master, development and test leaders) should always be available to clarify doubts.
• Periodic meetings between the development and testing teams should be scheduled. This contributes to understanding complex user stories. Moreover, the presence of the remote testing team in the Scrum meetings (Planning, Retrospective and Review) is very important to share, among all software project members, information regarding problems, suggestions for improvements and the planning of new activities.

B. The project information should be available in detail to all members

As part of the testing team is geographically separated, short stories and acceptance criteria descriptions alone are not enough for test case specification. User scenarios and the application wireframes should be available to cover all features to be developed/tested, bypassing the limitations imposed by the distance between part of the testers and the product owner. Moreover, changes in the user scenarios result in high effort to update test cases and their automation scripts. Therefore, these scenarios should be kept up to date, otherwise they can affect the quality of the testing activity.

C. Automation reduces the need for physical presence in the testing process

Test automation has an essential role in the success of distributed testing, because both teams can run the regression test suite constantly without depending on the time zones of the remote and local testing teams. Moreover, automation speeds up test runs even when new releases are published by the development team close to the deployment deadline.

D. Supporting tools are always important, but testing team organization is more so

Tools are also highlighted in the technical literature as an important element in a distributed software project. In the context of distributed testing in an agile project, two kinds of tools deserve special mention:
• A Scrum dashboard tool presents all activities performed daily by the remote testing team, showing information regarding the performance of testers and the tasks’ progress. Thus, the test leader is provided with information to support the testing process management, mitigating eventual risks. In our software project we used the FireScrum tool.
• A test management tool controls all information regarding the testing activities, test cases specification and validation, creation of new releases, test running, number of detected failures, and bug tracking (creation, correction, and validation). In our software project we used the TestLink and Mantis tools. Besides their importance, tools can fail, and we need to be prepared for this moment. If the network or a server crashes, all remote teams’ activities can be affected because communication between the teams will cease to exist. Thus, we cannot be dependent on tools to perform our activities. 


We could observe that it is feasible to integrate DSD with agile practices while maintaining the management and organization of a software project, because some practices are part of the testing activities in both scenarios, such as the need for efficient communication, automation as a resource to reduce cost, and allocation of tasks in small parts. On the other hand, the differences between these scenarios regarding the testing activities, such as continuous integration and daily meetings, could be addressed with technological solutions already reported in the technical literature.
Some challenges and lessons learned could be extracted from this experience, and they can support other software engineers performing testing in a similar environment.