6 Essential Usability Skills Every Developer Needs

I remember a conversation I had a few weeks ago with a friend about how Facebook came to replace so many of the social media websites we were in love with at the time of its launch. We both concluded that Facebook delivered an experience and interaction we couldn’t get on the platforms that existed back then.
User’s behavior changes with time. Their needs also vary from generation to generation. When it comes to using gadget and apps to interact, users don’t want to think too much to use a tool. It’s as if there is a “don’t make me think” chip planted in our brains. We just don’t want to do too much to achieve a lot while using gadgets and apps.
While there is no problem with users the way it sounds, there is a big challenge for companies rolling out apps and gadgets for users daily. Their profitability and growth relies solely on user’s satisfaction, experience and engagement.
While some companies think that users don’t really know what they want, you may be surprised to learn from our research at Zynar.co that users actually have an expectation of the kind of experience they love when they are using an app or a gadget.
Most apps and startups that have failed would attribute their failure to one thing; the user. It’s either that they have high churn rate, high bounce rate, low engagement, low downloads and the list is endless.
So what do users want really? I can tell you point blank that the best UX expert may try to craft an awesome UI but fail at getting your product to the promise land of “user Wowness”. This “user Wowness” can only be determined by real users when you create a culture of building interface and experience defined by users from the start of the product design and conceptualization.

I was involved in a user experience project for a popular website with over one hundred million users at the time. They had a problem of low downloads and engagement. In fact, they were stunned to discover that a particular country where they expected high downloads and signups yielded only a couple of million downloads after millions of dollars were spent on advertising and PR.
They decided to gather users in that country for a user testing jam. Voila! The moment of truth came when they discovered that the wording used to onboard users was totally misunderstood by users in that country.
After the user testing project concluded, they took the feedback home, discussed it and distilled insights from the most common responses. Remarkably, within a matter of weeks, signups went up by more than 1,000%, and that was the beginning of their growth in that country.
Zynar user experience testing helps organizations get into the minds of users before and after releasing products to the marketplace, and gain insight into how users will actually use an app or gadget.
This is one sure way of validating products and concepts before burning a huge chunk of funds on advertising and PR.



Unlike many of the other major tech companies, Apple has never had a formal bug bounty program or corporate policy for welcoming outsiders who poke holes in their security features. However, as TechCrunch reports today, Apple’s head of Security Engineering and Architecture Ivan Krstic announced at Black Hat that his company will now offer cash bounties of up to $200,000 for hackers and researchers who find and report security flaws in Apple products.

The announcement came during Krstic’s larger talk about the security features built into some of Apple’s newest services. The company usually sits out the popular security conference in favor of keeping big announcements limited to WWDC. Apple now says it has reached the point where its own internal testers, and even contract security firms, are having difficulty finding more bugs.

According to Securosis CEO and iOS security analyst Rich Mogull, the bounty is “the largest potential payout I’m aware of,” but also fairly limited in scope: the guidelines focus on a very specific set of vulnerabilities and Apple is currently working with a select list of researchers. (Although, the company says if someone outside the initial group finds a bug, they can easily be included in the program.) The highest level bounty covers bugs found in secure boot firmware components, but there are also smaller bounties for gaining unauthorized access to things like iCloud account data — a major talking point after the infamous celebrity photo hack.

While $200,000 might be high for an official corporate bounty program, it’s still only a fraction of a payout like the $1 million the FBI reportedly paid hackers to break into an iPhone owned by one of the shooters involved in the San Bernardino incident last year. And such high bounties can also be detrimental to security research in general. On the other hand, Twitter is a more secure place thanks to some $322,420 in bounties it has handed out over the past two years, and a bug bounty from Instagram made one 10-year-old Finnish kid $10,000 richer.

Via: TechCrunch
Source: Securosis


1. Focused Development Team
One thing is sure: every software development team assigns tasks or modules to each member, then brings the pieces together after each member completes his or her module.
Asking a member of your development team to test modules means a functional part of your team is pulled away, leaving you short one developer or engineer, and meeting your deadlines and targets becomes a mirage.
Crowdsourced software testing platforms like www.zynar.co help you work optimally with every member of your development team, letting them focus on the tasks ahead rather than worrying about unit testing or the overall testing of the developed app.

2. Cost-effective testing
You don’t need to pay through the nose. Crowdsourced software testing is cost-effective because you only pay for bugs that are approved based on your test case (or project description, as we call it at Zynar).

3. You gain your first set of customers or users

One great advantage of crowdsourced testing is that your first beta testers are your first customers or users, and you can experiment with your product or service as much as possible with them, without fear and with the assurance that your app will definitely get better.

4. Multiple locations and devices
When you test in a lab or within your organization, there are many things you may never consider, because you have the ideal environment for your software to run in and a limited set of devices. The moment of truth comes when your app is tested on devices you never knew existed. Slow networks and environments that are unfriendly to your app are other factors. Crowdsourced testing gives you this opportunity because testers are in dispersed locations and report on your application based on their own experience of it.

5. Expertise in different areas
Another advantage of crowdsourced testing is the opportunity to get your app tested by testers with different backgrounds and skills. These professionals are unwavering in reporting quality bugs to help your application’s quality assurance, localization, security and user experience.

6. Flexibility and Scalability

Crowdsourced testing is flexible: it can fit into any software development environment, and testers are always ready to test at every stage of your development.

Lastly, crowdsourced testing has been around for a long time and many tech giants like Facebook, Microsoft and Google have taken advantage of it with great results.



I remember when I was introduced to the idea of performance testing, and even the performance tester as a role.

Our company had one person whose sole focus was discovering performance and load-related information. My work, on the other hand, was with a team of seven testers.

My cube was right across from our manager’s, and if my manager wasn’t keeping his voice down, I’d likely get an early scoop on new initiatives. The company wanted a person from our group to do performance testing, and that person wasn’t going to be me.

In hindsight, they chose the right person. He did a great job. But I always wondered why it wasn’t me.

Let’s take a look at some of the differences in mindset and skill between performance and functional testers.

1. People vs. events
When I test software, I like to use personas — short but authentic descriptions of someone that might use my product. For example, knowing that Sarah is a regional sales manager is important. Knowing that Sarah graduated high school but did not go on to a university, has a five-year-old mobile phone, and spends 300 days a year on the road is a game changer.

All of these details guide how we test and the information we look for. We have a better idea of what she values, and what could make a long road trip for sales more tedious.

People are central to performance testing too, of course, but the persona isn’t the guiding light. Performance testers look for the extremes in life, the highest of highs and the lowest of lows.

Instead of the persona, performance is about events. Instead of Sarah, performance testers at Amazon think about Black Friday, the largest online shopping day of the year. They think about the flood of traffic happening online, and the millions upon millions of people hitting Amazon.com in the 24 hours after Thanksgiving to do their Christmas shopping.

This is what performance testers live for — Black Friday, the Super Bowl, major news events, or a tweet from a celebrity that is so viral it causes a server failure. Each of these represents a scenario that if not tested for, can take a site down and cause millions of dollars in sales to go to your competitor.

2. Focus on the task
General testing work is extremely varied.

On some days usability is important, and I spend time reviewing user studies with someone who is good at design and experience. Other days I think the world is one big input-combinations exercise, and I spend all day exploring the data that I should and shouldn’t be able to submit in a field, learning about the consequences of ‘bad data’.

And on others I dive into a programming language and figure out how to build some change detection scripts for an API that can be run every time a code change is committed to the source code repository.
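A change-detection script of that kind can be sketched in a few lines. This is a minimal illustration, not any particular tool: the `snapshot_diff` helper and the field names are invented for the example, and a real version would fetch the current response over HTTP and load the baseline from a file stored alongside the code.

```python
def snapshot_diff(baseline: dict, current: dict) -> list:
    """Report fields that were added, removed, or changed between
    a saved baseline API response and the latest one."""
    changes = []
    for key in baseline.keys() - current.keys():
        changes.append(f"removed: {key}")
    for key in current.keys() - baseline.keys():
        changes.append(f"added: {key}")
    for key in baseline.keys() & current.keys():
        if baseline[key] != current[key]:
            changes.append(f"changed: {key}")
    return sorted(changes)

# Baseline captured on the last known-good build; "current" is what the
# endpoint returns after the latest commit (both stubbed here).
baseline = {"id": 1, "name": "Sarah", "role": "regional sales manager"}
current = {"id": 1, "name": "Sarah", "role": "sales manager", "region": "west"}
print(snapshot_diff(baseline, current))  # ['added: region', 'changed: role']
```

Hooked into a commit trigger, a script like this doesn’t judge whether a change is a bug; it just surfaces the difference so a human can decide.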

Performance on the other hand, or at least the dedicated performance person, is a highly focused job. There are few long-lasting themes in the work of a performance tester.

These include: benchmarking, finding the combination of data and concurrent users that brings your product to a tipping point of slowness, and the never-ending question of whether or not the observed performance difference is OK.
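That tipping-point search can be illustrated with a toy model. Everything here is invented for the sketch: `simulated_response_ms` stands in for real measurements against a real system, and the capacity and SLA numbers are arbitrary.

```python
def simulated_response_ms(concurrent_users: int, capacity: int = 50) -> float:
    """Toy stand-in for a real measurement: response time is flat up to
    capacity, then grows quadratically with the overload."""
    base_ms = 120.0
    overload = max(0, concurrent_users - capacity)
    return base_ms * (1 + 10 * (overload / capacity) ** 2)

def find_tipping_point(sla_ms: float = 500.0, step: int = 10, limit: int = 200) -> int:
    """Ramp up simulated concurrent users until response time breaks the SLA."""
    for users in range(step, limit + 1, step):
        if simulated_response_ms(users) > sla_ms:
            return users
    return limit

print(find_tipping_point())  # with these toy numbers, the SLA breaks at 80 users
```

In a real run the inner function would be replaced by a load-generation tool, but the shape of the work is the same: step the load up, measure, and stop when the product tips into slowness.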

When new information about a feature starts coming in more and more slowly, I like to either change my focus to something other than functionality or move on to a different task altogether.

Performance testers remind me of developers. The bigger focus is on adding value to a project, but the work is split. It is half “testerly”, information discovery, and half “developerly”, creating tooling that will help other people.

3. Working environment
The agile teams I have worked on usually have developers, product people, and testers all mixed up into a team.

Being together like that can make information sharing easier. Instead of getting on Skype to talk to a developer about something I notice in the product, I just lean over this way and demonstrate the part of the software in question. And when the developer tells me he isn’t sure how it should work, I lean that way to talk to a product person.

The performance tester will, more often than not, lean in the direction of the developer. The people writing production code usually provide the most guidance about where to look for problems next, and are involved again later to see how their new database indexes, or reduced number of HTTP calls, or smaller data load, affected the performance profile.

The performance tester takes those answers and talks to operations about the impact, how to measure it, and what experiments to conduct if things work as they should.

The performance tester I talked about earlier was officially part of the test team. He had a cube with us, and when it was time for yearly reviews he talked with the same manager we did. About half of the week though, his cube was like the cube of a good sales person — completely empty because he was talking to people in other roles. Over time he had become an honorary member of the development team because of how closely he worked with the staff and management.

Eventually he moved in with his developer brethren and switched managers. Of course, all of this happened when teams were more often thought of as separate family units, each with its own special housing and parent. It might be hard to notice the shift visually today; no one would pack up their computer and move to a new part of the room, a different floor, or even a different building.

Spend some time with the group though, and I bet you can tell who is who.

4. Technical leanings
Even in a co-located, cross-functional team, people have their specialties, and most performance people lean toward the developer side of things.

The last time I did load testing, the name of the game was seeing how many users could simultaneously complete cases on an anesthesia documentation platform. The webpage with case workflow functionality was a little complicated. It took a popular load testing tool, and some JavaScript trickery, to get the scenario running and that was just to collect data. After that, there was the matter of reviewing HTTP calls, load times, and latency.
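Once a run like that finishes, the collected numbers still have to be summarized before anyone can judge whether the observed difference is OK. A common first step is percentile latency; here is a minimal nearest-rank version, with made-up sample data:

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample that is greater than
    or equal to p percent of all samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Latency samples in milliseconds collected during a load run (made-up data).
latencies = list(range(1, 101))
print(percentile(latencies, 50), percentile(latencies, 95), percentile(latencies, 99))
```

Percentiles matter here because averages hide the slow tail: a median of 120 ms tells you nothing about the 1 in 100 users waiting several seconds.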

People with a development background, or at least some varied technical experience, will have an advantage. On average, most testers don’t come from a programming-heavy background. Often they come from a liberal arts program at a university (maybe English literature, history, or philosophy), work in an unrelated field for some time, then make a career change that lands them on a software support team or perhaps in product management.

There are of course variances, but that pattern isn’t unheard of, and it turns out quite a few good software testers. Testers who go the liberal arts route have fantastically varied backgrounds and perspectives on software; that is part of why they make such good testers.

Unfortunately, they don’t usually have the full immersion technical experience of a programmer.

Someone with a year or two of programming experience might not have to go home every night and spend hours reading about HTTP, learning a new programming language, and figuring out how to use a (sometimes) cumbersome tool all at once. If they don’t already know the basics, they’ll at least have an intuition, from previous experience, about where to look for information.

That doesn’t mean “non-technical performance tester” is an oxymoron, but it will be challenging to work in a role that is focused on strategy and also requires communicating with people from operations, database, programming, and other roles to figure out what needs to be tested, who will test it, and how to get the data into a format that a non-technical performance tester can analyze. It can be done, but more often than not this becomes a consulting role: teaching the team how to analyze, then transitioning to another team (at a larger company) or to a different company entirely.

Mindset is a very subtle thing. It can be hard to say what that means for different roles in a software team, partly because there is overlap and partly because it is built into how we interact socially. But, I can always see the little personality differences between a tester, a programmer, and someone in between the two like a performance person.

Credit: http://blog.smartbear.com/

Zynar

 

Software testers had a lot to keep up with in 2015. And as you may expect, 2016 won’t be any different.

As we start the New Year, we wanted to share five predictions for new trends and developments in the world of software testing in 2016.

1. Python will continue to grow in popularity

For many who are new to coding, Python remains the first choice of language, as it is easy to learn, looks like everyday English, and is human-readable.

As more manual testers move to automated software testing, and start learning a programming language for the first time, the use of powerful, easy to learn languages like Python will continue to grow.
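To illustrate that readability, here is what a manual tester’s first automated check might look like. The function and the signup rule are invented purely for the example:

```python
def is_valid_signup(email: str, age: int) -> bool:
    """A hypothetical signup rule that reads almost like
    the manual test case it automates."""
    has_at_sign = "@" in email
    is_adult = age >= 18
    return has_at_sign and is_adult

print(is_valid_signup("sarah@example.com", 34))  # True
print(is_valid_signup("not-an-email", 34))       # False
```

A tester who has never programmed can follow the logic line by line, which is a large part of why Python keeps winning this audience.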

2. Mobile app testing will go mainstream

As app store approval processes become more stringent, mobile app testing will continue to grow in prominence. As a result, releasing into the wild and waiting for customer feedback to fix issues will become more difficult.

On the flip side, on-premise and cloud-based device farms will be required to reduce the infrastructure needed to test these apps.

In 2015, we saw a lot of investment in on-premise and cloud-based device farms. And, with the recent entry of Amazon’s AWS Device Farm and Google’s Cloud Test Lab, pricing pressure on other commercial vendors to provide device labs will continue to increase — resulting in good news for testers.

3. The role of open source will continue to grow

We saw a lot of investments in open source over 2015.

This started with Apple making the next version of its programming language, Swift, open source, and was followed by Microsoft taking the same path with its .NET platform. From a testing point of view, open source tools like Selenium and SoapUI continue to grow in adoption. For long-term success, commercial vendors will increasingly need to be part of the open source movement.

SmartBear was early to recognize this trend and, as a result, released TestComplete’s integration with Selenium in 2015. In 2016, we expect to see more and more commercial vendors embracing open source.

4. Agile & Continuous delivery will grow in prominence

As delivery cycles continue to get shorter, testing will need to get integrated into the development cycle. This means that developers will be more involved with testing.

In order to cater to this audience, testing tools need to become more developer-friendly. To do this, testing tools will need to increase their focus on integrations with integrated development environments (IDEs), and other tools that help with continuous delivery.

5. Expect more conversations regarding containers

Microservices, containers, and Docker got a lot of attention in 2015, given the scalability and portability benefits associated with containerization.

From an agile testing perspective, Docker will continue to get a lot of attention, especially when it comes to solving testing challenges that arise from different environment configurations in development, testing, and production.

Credit: http://blog.smartbear.com/


App developers in Africa and across the world have a reason to smile, as a new platform has been launched to help them gain the traction they need to drive user growth while testing their apps for bugs, user experience and security.

Zynar.co is a platform built to help organizations in the technology industry test their Apps/Software before final rollout and gain initial users at the same time.

Most startups struggle to get the first set of users they can leverage to build toward their projected growth and traction.

Zynar.co uses a crowdsourcing model, so you have users all over Africa, or in your chosen location, who use your app and submit bugs, reviews and even insights on growth and user experience.

The interesting thing is that companies only pay for bugs approved by a project manager assigned by Zynar.co. Many industry watchers say this will help engage youths who are already IT-savvy and turn them into trained software testers.

Zynar.co is rolling out a series of online trainings on software testing for youths and other interested people, after which trainees will go through a sandbox to test their skills and be onboarded into the community of testers.

What makes Zynar unique is that all of its workers and testers sign a non-disclosure agreement. This means that every test result and all information about the company are kept confidential between the parties involved.
