The future of the PHP PaaS is here: our journey to a new platform

In our team we’re very confident in our ability to produce high-quality software. For the past decade or so we have constantly improved our skills, our tool stack, our working methods, and consequently our products. We can’t say the same about our sysadmin skills. Not by a long shot.

Sysadmins - because even developers need heroes

For many software developers, system administration is a mystery at best and a nightmare more often than not. When you frequently find yourself solving server problems by copy-pasting a series of commands you don’t fully understand, you know you’re not up to the task. This type of server maintenance is definitely not what anyone’s customer deserves, but companies of our size don’t have the resources to hire good system administrators either.

The DevOps illusion

Fully managed server solutions aren’t cheap either and don’t provide a lot of flexibility. For a long time we solved the hosting problem by operating a small number of flexibly scalable VPS instances from ServerGrove, whose excellent support we could always fall back on when needed.

But we arrived at a point where we wanted to be able to spin up individual server instances for each project and each environment, and for this to be replicable and robust it needed to be automatic. We wanted to be sure that our servers had exactly the dependencies we needed and that we could spin up as many instances as needed, whenever we needed them.

At the same time virtual machines and server provisioning systems started to gain popularity among developers. Everyone and their cat started talking about technologies like Vagrant, Puppet, Ansible, or Docker. There was the promise of a better world where devs would be able to have repeatable server instances and whole infrastructures up in no time and without problems. That, of course, turned out to be an illusion. Server provisioning and containerization are incredibly powerful technologies contributing hugely to the quality of web services and software, but they’re not a replacement for a sysadmin. Quite the contrary actually, in order to build quality containers with a stable and robust provisioning infrastructure, you need, you guessed it, a good system administrator. 

PaaS to the rescue?

So, with the sobering realization that Docker and Ansible weren’t going to solve our problem, our attention was drawn to another relatively new phenomenon: the PaaS. These platforms promise to do a lot of the sysadmin work for you by providing preconfigured, managed container systems for deploying applications. This was exactly what we and many others needed. So we started looking into these services, specifically those targeting the modern PHP ecosystem, like PhpFog, Pagoda Box, Fortrabbit, etc.

We tested, observed, and evaluated. Several times we thought we’d found a satisfying solution with one of the providers, but something always ruined the fun: instability, lack of flexibility, no writable folders, still in beta, too expensive, you name it. We found a quantum of solace in the fact that others, including prominent members of the PHP community like Phil Sturgeon, felt the same pain. We concluded that it was too early for the PaaS and went into observation mode. Then a new contender came along.

PaaS 2.0

Checking them out was more or less routine, along the lines of "Oh, yet another new PHP PaaS product, let’s go see how THEY screwed up." The promises on the website sounded similar to those of other providers, but somewhat more assertive. And who doesn’t like to hear this?

High-availability PHP cloud hosting platform that is fast and simple to use. Stop reinventing DevOps and wasting time on tedious admin chores.

At first I was taken aback by the strong Symfony/Drupal orientation, but after reading some of the documentation it all just sounded too good to give up on already. It seemed like many of the problems of the competition had been solved. I started to get the feeling that this might be just what we had been looking for and decided to give it a serious try. The result: minds blown. We realized that this service had taken the PHP PaaS idea to a whole new level, hopefully spearheading a new generation of PaaS.

A few months later, we’re using it for all our new projects and are migrating older projects over too. Phil Sturgeon is right: once you’ve tried a hover-car, you just don’t want to drive a normal car anymore.

What we love about our new deployment and hosting solution

So let me introduce a few of the things we’re most thrilled about when working with our new platform.

Literally 0 sysadmin

We’re completely freed of any kind of sysadmin work, but we still have all the control we need over our servers. As with most PaaS solutions, everything is configured in a file that belongs to your project. Here’s an example of such a configuration file:

name: "example app"
type: php:7.0
build:
    flavor: composer
timezone: Europe/Zurich
relationships:
    database: "mysql:mysql"
    redis: "rediscache:redis"
disk: 2048
mounts:
    "/temp": "shared:files/temp"
    "/sessions": "shared:files/sessions"
dependencies:
    ruby:
        sass: "3.4.17"
    nodejs:
        gulp: "3.9.0"
        bower: "1.7.1"
runtime:
    extensions:
        - redis
hooks:
    build: |
        set -e
        vendor/bin/phing deploy-db-migrations
        npm install
        bower update
    deploy: |
        set -e
        vendor/bin/phinx migrate --configuration phinx-platform.php
crons:
    backup:
        spec: "0 2 * * *"
        cmd: "cd /app/httpdocs ; php index.php cron offsite-backup"
web:
    document_root: "/httpdocs"
    passthru: "/index.php"
    whitelist:
        # CSS and Javascript.
        - \.css$
        - \.js$

        # image/* types.
        - \.gif$
        - \.jpe?g$
        - \.png$
The platform team takes care of running high-quality containers for all recent versions of PHP as well as HHVM. We just indicate which PHP extensions, Ruby gems, or npm packages we need, and that's it. As you can see, we can also do a lot of other things, like mounting writable folders, running scripts during build or deployment, setting up cron jobs, or whitelisting files for public access. No need to think about sysadmin at any point.

Plus, of course, all of this is under version control, so you'll know the exact server state at every revision of your software. How cool is that?

Push to deploy with 0 downtime

The master branch of your git repository is the live site. Whenever you have an update to your application, run git push platform master and the platform will attempt to build and deploy your project. If anything goes wrong during the build, the app will not be deployed. During deployment, all requests to your app are buffered, which means 0 downtime deployments in any case. If the app can be successfully deployed, the buffered requests are resumed against the updated app; if not, they are resumed against the status quo.

Git branch = fully independent app instance

This is one of the most awesome features. You can push any branch of your app to the platform and you'll instantly get a completely independent instance of your whole app with all its containers (PHP, DB, Redis, ...).

Imagine you have an older PHP 5.5 app and you want to run it on PHP 7.0 to see what happens. With our new platform, this is mind-blowingly easy. All you need to do is:

  • make a dev branch of your repository, e.g. php-7-dev.
  • change type: php:5.5 to type: php:7.0 in your configuration file.
  • commit and push the branch to platform.

Here you go: you'll have an instance of your app with its own web URL and read-only shell access, running on PHP 7.0.
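The three steps above can be simulated locally like this (a sketch only: the file name app.yaml and the remote name platform are stand-ins for your actual setup, and the final push is only indicated in a comment):

```shell
# Throwaway local repository standing in for your real project.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
printf 'type: php:5.5\n' > app.yaml
git add app.yaml && git commit -qm "initial state"

git checkout -qb php-7-dev                  # 1. make a dev branch
sed -i.bak 's/php:5\.5/php:7.0/' app.yaml   # 2. bump the runtime version
rm app.yaml.bak
git commit -qam "try PHP 7"                 # 3. commit; then: git push platform php-7-dev
cat app.yaml                                # prints: type: php:7.0
```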

What's more, you can branch and merge directly in the GUI if you wish to.

No add-ons needed

If you know your way around Heroku, you're familiar with add-ons. Heroku's approach is distributed, using a marketplace of third-party services, while our new platform combines all required elements within a single testable and consistent environment. It provides a growing number of services like MySQL, PostgreSQL, Redis, MongoDB, Elasticsearch, and more out of the box, ready and optimally integrated, each running in its own container. *Batteries included*, as they like to call it.

Obviously, when you create a new branch (= instance of your app) all service containers you're using will be cloned as well.

What about Heroku?

Heroku is one of the pioneers, if not the mother, of mainstream PaaS, and they surely know what they're doing. They only added dedicated support for PHP in 2014, but they were still the only serious alternative for us. We're convinced we could have arrived at a satisfying solution using Heroku as well. For us, though, our new platform wins. Thanks to its "batteries included" approach, it provides storage and data management on a level that Heroku can't. While technical skills are required for the setup, we think the approach is simpler, as well as more consistent and robust.


In addition to all the described advantages of using a good PaaS, this migration has forced us to significantly advance and fully automate our build and deployment process. We're thrilled with the monumental improvement in our deployment and hosting quality, the productivity boost, and the peace of mind this new approach gives us.

Published in CS Science
Case study: scientific translations


Translation is a hard problem.

There is nothing that a person could know, or feel, or dream, that could not be crucial for getting a good translation of some text or other. To be a translator, therefore, one cannot just have some parts of humanity; one must be a complete human being.

-- Martin Kay

In social scientific research the translation problem is even more pronounced. Traditional questionnaire research is in itself affected by the fact that language is not exact, and the degree to which different people have a similar understanding of a specific concept varies. When gathering data in several language contexts with the aim of treating the dataset as one, regardless of language, exact translation is needed. Exact translation, however, is an oxymoron. Researchers keep pointing out how differently emotions, personality concepts, and other social constructs are understood across cultures.

This is a problem all cross-cultural questionnaire research has to tackle as best it can. In March we posted about such a project, the Sustainable Workforce study, for which we built the study software. In addition, we were tasked with coordinating the scientific translation of the questionnaires into 7 languages.

To make this whole matter even more complicated, translating an online questionnaire also means translating parts of a piece of software. Software translation is in itself yet another complex problem.



In most cases the obvious choice for translations is professional translators. They have studied the complexity of their trade and have experience in maintaining meaning despite contextual differences. There are of course also specialists in various professions and sciences, such as medical, legal, or technical translators.

But what about translating psychological constructs? Can we trust people without social scientific background to translate these? Doesn’t it take a certain degree of knowledge about how measurement scales are built and validated? With scientific translations it is often necessary to have everything translated back into the original language and then evaluate the differences and adjust the translation accordingly. 

Regardless of the answers to the previous questions, what if the budget for the study doesn’t allow for translations, and perhaps back translations, made by professionals? Master’s or doctoral students in psychology or sociology with the respective mother tongue can be less expensive than professional translators, and they have the right background. What they may lack is actual expertise in translation.


The solution we offered in the past, as well as for this project, was based on our extensive network of social scientists across Europe. For cost reasons, and because the project was already under time pressure when we came on board, we suggested replacing back translations with proofreading by a second translator. This seemed justified given that the surveys largely consisted of questions of a demographic nature rather than actual psychodiagnostics.

For each language we recruited two people from our network, all of them with either a Master’s Degree or a PhD in Psychology. The only exception was Portuguese, where one of the two was a professional translator. 

We then prepared the materials, briefed the translators, and coordinated the work between the translators and the proofreaders, as well as the continuous updates coming from the project side. We also gave translators access to the system, where they could verify their own translations in the final context in which they would be shown. The result of this whole process was then to be verified by speakers of the respective languages within the research project.


Managing translations that are split into many small parts because they are used in software, while at the same time dealing with constant increments and small changes, is a very challenging task. Additionally, sitting at the interface between all parties without being able to judge either the quality of the translators’ work or the validity and accuracy of the customer’s feedback is uncomfortable, to say the least.

At one point in the project, the state of some translations didn’t meet the expectations of the research team at all. A small project crisis ensued, which was then collaboratively solved. The interesting thing about it all is the analysis of the factors that may have contributed to the situation:

  • Translations are hard, scientific software translations are very hard.
  • There is more than one possible solution for any sufficiently complex translation problem; the perfect translation often doesn’t exist.
  • Due to the tight schedule of the whole project, the texts already in translation were still changing.
  • It is easy to underestimate the verification and testing effort such translation work requires from the customer’s side. In this case, it was heavily underestimated. In other words, the implicit customer expectation was that the delivered translations would be as good as final. Our implicit assumption was that only the customer would be able to finalize the product in cycles of acceptance testing. Such differing assumptions unavoidably lead to misunderstandings.
  • Involvement of the project team in the translation process is crucial for two reasons. For one, only the creators of a questionnaire can verify its translation. Secondly, involvement creates commitment to and satisfaction with the created solution. This isn’t a trivial point, given the fact that many correct translations of a given text exist.
  • A degree in psychology doesn’t guarantee good translation outcomes due to a potential lack of experience and expertise.
  • A degree in translation doesn’t guarantee an outcome that is acceptable for researchers but it should guarantee linguistic correctness.

How to do it better

Despite the fact that, after all the additional rounds of reviews and corrections, the study went online as planned, and no serious problems with the questionnaires have been reported since, we definitely think a few things could be improved for future projects of a similar nature.

  • Clarify expectations very carefully. True for every project.
  • Don’t start the translation process before the source is absolutely final. This means that the online questionnaire has to be adequately tested and language changes implemented, before the translation process starts.
  • Use professional translators for the translation, use scientists for proof-reading. 
  • If possible, the actual research team should do the back translation or proof-reading.
  • At the very least, the final editing and quality review cannot be outsourced and take a significant amount of time. This has to be taken into account when planning the project.
  • Use a software environment where translations can be made in context from the start.
  • Use some kind of change tracking if possible.
Case study: liberating 60 years of cantonal Swiss election data from paper archives


In a country like Switzerland, where bureaucracy has been practiced with devotion at every administrative level for centuries, systematically collected data of all kinds lies fallow. Data that can be of unique value to science, since comparable information has been collected this meticulously in very few regions of the world.


We psychologists have so far shown little interest in such data. For economists and political scientists, however, it is worth its weight in gold. The catch is that the vast majority of it was collected before the computer age and is therefore stored only on paper.

How do you build a flawless digital data basis from data that sits in large books in archives and, on top of that, looks different in every Swiss canton?
This is exactly the challenge taken on by Prof. Dr. Mark Schelker and Dr. Lukas Schmid, whose goal was to digitize the results of the cantonal parliamentary elections of the last 60 years. Cloud solutions was able to support the researchers in designing and implementing an optimal technical solution.


Of course, optical character recognition (OCR) is relatively advanced these days. For the challenge of the cantonal election data, however, OCR was out of the question for several reasons:

  • When scanning thick books, content near the binding is often distorted, faded, or even slightly cut off. OCR software cannot cope with this.
  • Tables with many separator lines are also a problem for OCR.
  • Older typefaces have a poorer recognition rate.

Manually correcting poor OCR output would have been one option. However, this quickly becomes more laborious than entering the data directly from a simple scan, and it very likely lets misrecognized text slip into the data matrix as errors.
That left only manual data entry. Traditionally one would probably use Excel for this, which brings several problems:

  • Work of this kind is error-prone due to its repetitive nature, and Excel offers no support for avoiding error sources such as shifted rows, wrong entries, wrong assignments, etc.
  • Manually merging many individual Excel files is another source of errors.
  • With many Excel files distributed across multiple data-entry workers, there is no ongoing overview of the entry progress and data quality.

Implemented solution


The solution, developed in joint brainwork with the customer and programmed by CS, aimed to combine the respective strengths of technology and humans in order to maximize data quality. The system had the following features:

  • Clearly structured, software-guided data entry.
  • Avoidance of redundant entry by splitting the work into several entry levels (canton, district election year, candidates).
  • Certain pre-entered data that could be offered, already correct, for selection.
  • Data validation on input.
  • Built-in quality checks (comparing perfect, pre-entered records against the entered data).
  • Careful instruction and support of the data-entry workers.
  • Additional manual spot checks by the research team.
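To illustrate the built-in quality checks mentioned above, comparing an entered record against a pre-entered reference record might look roughly like this (a hypothetical sketch; function and field names are invented):

```php
<?php
// Hypothetical sketch: compare an entered record against a
// pre-entered "perfect" reference record, field by field.
function diffAgainstReference(array $entered, array $reference): array
{
    $errors = [];
    foreach ($reference as $field => $expected) {
        if (($entered[$field] ?? null) !== $expected) {
            $errors[$field] = ['expected' => $expected, 'entered' => $entered[$field] ?? null];
        }
    }
    return $errors;
}

$reference = ['name' => 'Muster', 'votes' => 1523, 'list' => 'FDP'];
$entered   = ['name' => 'Muster', 'votes' => 1532, 'list' => 'FDP'];

var_dump(diffAgainstReference($entered, $reference));
// Only the "votes" field differs (1532 was entered instead of 1523).
```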


In this way, around 190,000 candidacies were entered in highly satisfactory quality by 30 data-entry workers at the end of 2014 / beginning of 2015, spread over 60 years, 4,000 electoral districts, and 15,000 lists.

The relevance of non-response rates in employee attitude surveys

The HR department of any organization, as well as consulting companies in the field of HR, employ surveys as a methodological means of investigating how they can improve as an employer, increase performance, and become more profitable. Often the focus lies on systematically analyzing the staff’s perception of working conditions, job attitudes, health, and other performance-related indicators. In order to understand which aspects need to be improved and to have a sound basis for decisions, high response rates are necessary. Unfortunately, response rates have proven in many instances to be relatively low for surveys addressing the entire organization.

In one of our previous posts we focused on methods which could help to improve the response rate in employee surveys. Now let us focus on how central occupational performance indicators, such as job attitudes, might influence response rates in employee questionnaires. An interesting study by Fauth and his colleagues (2013) focuses on the relationship between employee work attitudes (e.g. job satisfaction or commitment) and non-response rates and how they influence each other. 


How non-response rates depict employees' job attitudes

Whilst previous research on this topic has mainly focused on the relationship between the individual working attitude of employees and their non-responsive behavior in surveys, Fauth and his colleagues took a different approach. They were interested in the effects of group-level work attitude on response rate. Although co-workers and work group members influence the attitude and perspective of other employees, the relationship between the job satisfaction of an entire working team or unit within an organization and their survey response behavior has previously been neglected (Cropanzano & Mitchell, 2005). From a practical perspective, such knowledge is crucial, as the survey feedback processes in companies almost always operate on aggregated levels (e.g. team, business unit) and not on the individual level. Thus, Fauth et al. (2013) addressed this need for group-level analysis of non-response rates in organizational surveys. They hypothesize that aggregated job satisfaction is positively related to survey response rates at the work group level.

As social exchange theory (Cropanzano & Mitchell, 2005) underlines, individuals are willing to invest more effort and energy when content. Applied to the work sphere, this means that satisfied employees are willing to invest their energy in additional non-work-related tasks, such as completing employee surveys. This form of Organizational Citizenship Behavior (OCB; Rogelberg et al., 2003) explains the previously detected positive relationship between work satisfaction and response rate in employee attitude surveys on an individual level (Klein et al., 1994).

In order to test whether employee satisfaction is also positively related to survey response rates at the work group level, Fauth et al. (2013) conducted large-scale follow-up employee surveys in four distinct companies in 2002, 2004, and twice in 2006. The 1120 participating employees were grouped into 46 work groups of approximately 24 employees each. Aggregated job satisfaction was assessed via a multi-item measure, the Job Descriptive Index. The results show that work groups with greater combined job satisfaction had significantly higher response rates. Furthermore, the study showed that, independent of this satisfaction effect, smaller teams and teams with more heterogeneity in tenure and gender had higher response rates. Intriguingly, no difference in response rate was found between blue-collar and white-collar workers.

This points to an interesting avenue in organizational survey research: that not only the employees' answers to survey questions are relevant for organizations when assessing group perception of employment situation, but also their response rates. Specifically, higher response rates could indicate a greater general work satisfaction and be an interesting indirect indicator of the overall attitude of a working unit towards their job and their organization. 

Published in CS News
Tapas and Strategy

It is astonishing how many things a virtual company like cloud solutions can handle over the internet. The range of sophisticated collaboration tools is enormous, and a solution can be found for practically every challenge. Quite honestly, a video meeting in which everyone works together on a Google Doc is considerably more efficient than an in-person meeting.

On a social and emotional level, however, a video meeting naturally doesn’t have the same value as meeting in person. That’s why, despite all the tools, we regularly meet in corpore, and of course always in interesting places. Most recently in April, when the management team gathered in Valencia for a four-day strategy meeting.

Flamenco: Adrian, his singer, and a dog

Since, as mentioned, we can also talk strategy online, but can neither eat tapas together nor watch Adrian play flamenco online, this post is called "Tapas and Strategy" and not the other way around. Still, work was by no means neglected: our marketing strategy is taking ever clearer shape.

In July it’s off to Berlin, where not just the management team but all of cloud solutions will meet and stay in an Airbnb apartment.

Published in CS Tech
The PHP Revolution

PHP originally emerged as a scripting language and was not designed as a programming language. Over time, PHP evolved in the direction of a programming language, but for a long time it had a bad reputation in this area, one that partly persists to this day and was partly justified.

Yet in recent years PHP has revolutionized and improved itself step by step, so that the situation now looks quite different. PHP has long left behind its beginnings as a "lingua franca" for script kiddies and has increasingly established itself as a programming language to be taken seriously, for example in the enterprise sector.

Often deployed as part of the LAMP stack (Linux, Apache, MySQL, PHP), PHP plays a dominant role on the web. Around 80% of all websites are based on PHP, making it the most widely used programming language for websites and web applications.


Evolution of the PHP language

An important milestone in this evolution was the introduction of a new object model in 2004, which arrived with PHP 5.0 and represented a big step towards OOP (object-oriented programming). PHP was also significantly modernized, for example through the introduction of namespaces, anonymous functions and closures (PHP 5.3), traits and an integrated web server (PHP 5.4), and generators (PHP 5.5). Moreover, PHP’s performance has improved considerably, and the next version, PHP 7, is expected to bring another significant speed boost. PHP 7 will also introduce long-missed concepts such as scalar type hints and return types.
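As a small illustration of the upcoming scalar type hints and return types (a sketch; the function name is invented):

```php
<?php
declare(strict_types=1);

// PHP 7: scalar type hints on the parameters and a declared return type.
function averageScore(int $total, int $count): float
{
    return $total / $count;
}

echo averageScore(27, 4); // prints 6.75
// averageScore("27", 4) would throw a TypeError under strict_types.
```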


Git and Composer

Git is the most widely used source code management tool, employed by a multitude of PHP projects as well as by PHP itself. In addition, GitHub and similar services have established themselves as simple and popular platforms that let teams work on software together.

Composer is the long-overdue package manager for PHP: it is for PHP what Gem is for Ruby or npm for Node. The functional interplay of Git and Composer forms a very practical, flexible, and powerful foundation for modern software development with PHP. This interplay is also one of the most important reasons for the drastically rising quality of many PHP frameworks and libraries recently, and it has led to a new blossoming of the PHP ecosystem.
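For illustration, a minimal composer.json might look like this (the required package and the namespace are chosen purely as examples):

```json
{
    "require": {
        "php": ">=5.5",
        "monolog/monolog": "^1.17"
    },
    "autoload": {
        "psr-4": { "App\\": "src/" }
    }
}
```

Running `composer install` then resolves and downloads the dependencies and generates the autoloader.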


PHP ecosystem and community

Individual PHP frameworks, libraries, and applications have moved closer together, whereas they used to be rather isolated islands. These projects now often benefit from one another and complement each other, and there has been an enormous improvement in software quality and adherence to standards (such as the PSRs of the PHP Framework Interop Group).

New projects can thus build on an existing framework, and additional, proven libraries can be pulled in automatically with Composer. This also strengthens the individual projects, which gain wider adoption, which in turn gives a boost both to the PHP community in general and to the communities of the individual projects, and these in turn benefit each other. All in all, there is a large and dynamic community, which has produced, for example, The League of Extraordinary Packages, dedicated to collecting good, modern PHP libraries.

The PHP world also contains a number of prominent projects. The three most widely used CMS (content management systems), WordPress, Joomla, and Drupal, are all based on PHP. There is also a large selection of frameworks such as Zend Framework, Symfony, and Laravel, which have contributed to PHP being used more and more in the enterprise sector. Well-known companies such as Facebook, Google, and Yahoo use PHP successfully.


PHP has undergone a revolutionary development in recent years, and the entire ecosystem has changed. The new era that has begun continues to offer great potential for PHP and its ongoing improvement. High-quality software can be built with PHP, and many problems of web development can be solved elegantly.

In our year-opening post in January we already mentioned one of our new projects, Sustainable Workforce.


Sustainable Workforce is a unique scientific research project on company investments in human and social capital, work-life balance management, labour flexibility, the long-term employability of older employees, and flexicurity in nine European countries. Online surveys are used to collect longitudinal data (two waves one year apart) from over a hundred organizations, hundreds of supervisors, and tens of thousands of employees in Bulgaria, Finland, Germany, Hungary, the Netherlands, Portugal, Spain, Sweden, and England. Besides the three main questionnaires with relatively complex routing flows, which also have to be provided as PDFs for offline completion, a series of vignette experiments with multidimensional randomization has to be developed.

The multilevel character of the design, as well as delivering surveys of this complexity in nine languages, offers welcome opportunities to extend our framework SurveyLab. We are proud to have won the race to implement this project and are delighted to serve yet another large EU research project as technical research support partner!

From Excel to Javascript - Implementing Spreadsheets in the Browser

Why we moved our spreadsheet to the web

At CS we use spreadsheets to break down paper questionnaires into logical tabular entities such as intro texts, items, skip conditions, pages, and many more. The resulting relational data can then be imported into a relational database and used in SurveyLab projects. SurveyLab is our RAD framework for building complex online survey systems. For a number of years we used Excel to accomplish this task; however, for the reasons detailed below, we found it not to be the optimal choice.


Collaboration and version control

If you have ever had to collaborate on an Excel file, you know how problematic this can become. There is virtually no way of concurrently editing an Excel file, which means time lost as the various collaborators add their changes in series. At times you end up with several versions of the same file, only to realize that integrating them is near impossible, or at least very error-prone. But wait - isn’t that why we have version control systems? Supposedly yes; however, Excel uses binary file formats, which makes diffing and merging conflicting changes vastly more difficult.

Data portability and availability

If you want to move your spreadsheet data to a database or need to serve other applications with it, you must export it into appropriate formats and have these apps import the data again. If you still want to be able to change this data easily via spreadsheet editor, you have to repeat this process every time changes are made. Clearly, there are better, more centralized ways of handling this.

Limited functionality

Excel obviously has many built-in functions. Yet what do you do when you need to implement your very own specialised validator? What if you have to check for occurrences of foreign keys on another sheet? You can attempt this in Excel, but as you’ll find, it’s messy and hard to maintain.

Getting rid of technical debt

Realizing the technical debt we were accumulating with our Excel solution, we started looking into ways of dealing with the above problems. This initiative was also given wings by the fact that we had started working on an advanced GUI for editing surveys. Since this advanced GUI will employ the exact same data that is manipulated via the spreadsheet, we needed an accessible, extensible and portable format. Such problems can easily be solved by using a standardized data format like JSON and automated via an API. We therefore decided to build our own browser-based spreadsheet editor. As we did not want to succumb to the “not invented here” syndrome, we had a look at available open-source and commercial spreadsheet solutions based on JavaScript.
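Concretely, such a project file can be little more than one JSON array per sheet, with rows as plain objects. The sheet and field names below are invented for illustration, not the actual SurveyLab schema:

```javascript
// Illustrative project file: one array per sheet, rows as plain
// objects. Sheet and field names are made up for this example.
const project = {
  pages: [
    { id: 1, title: 'Intro' },
    { id: 2, title: 'Demographics' }
  ],
  items: [
    { id: 10, page_id: 1, text: 'Welcome to the survey.' },
    { id: 11, page_id: 2, text: 'What is your age?' }
  ]
};

// Plain JSON like this diffs line by line under git, survives a
// round trip unchanged, and can be consumed by an importer or API.
const serialized = JSON.stringify(project, null, 2);
const restored = JSON.parse(serialized);
```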

spreadJS by Wijmo

The Excel-like demo by Wijmo seemed very promising, so we committed some time to evaluating it. It has almost all the functions of Excel and provides an API for customization. However, in addition to being too expensive, it had other problems.

Pros:

  • It can basically replicate full Excel functionality
  • It is in ongoing development
  • Good support and fast forum responses from the community and Wijmo


Cons:

  • Expensive (~800€)
  • Requires an Excel I/O service or client-side software for some functionality, such as creating a base template from an Excel file
  • Doubtful code quality, e.g. CSS styles inside the HTML and JavaScript code, which makes customizing difficult. Large overhead and storage needs because of the way meta information is handled: our base template, with only a little default data, converted to a spreadJS JSON file of about 4-5 million characters. Our plan was to use localStorage for storing data while editing, which is limited to only a few MB (depending on the browser) and would not have been sufficient to store several projects.
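A quick way to guard against that limit is to measure the serialized size before writing. This is only a sketch; the character threshold is an assumption, since actual quotas differ per browser:

```javascript
// Rough pre-check before caching a serialized project in
// localStorage. The ~2.5 million character threshold is an
// assumption (quotas vary by browser and are often counted in
// UTF-16 code units), not a guaranteed limit.
function fitsInLocalStorage(obj, maxChars = 2500000) {
  return JSON.stringify(obj).length <= maxChars;
}
```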

Google Docs Spreadsheets

We also looked into using Google Spreadsheets, but since we cannot implement our own functionality there, that would have meant building a spreadsheet editor from scratch and using the Google Spreadsheet API “just” to store and read data. Moreover, we wanted to host the editor ourselves, as our projects tend to contain sensitive data, and we like to stay independent.

jQuery plugins

We looked at some of the many smaller jQuery spreadsheet plugins that are out there, like flexigrid, jQuery.sheet and others. Yet most of them didn’t meet our requirements: they were no longer in active development, poorly documented, or too limited in scope.

Our choice: handsontable

We decided to go with handsontable (hot), as it is cleanly and elegantly written, readily extensible and well-documented, and updates are frequently released. The one troublesome thing is the large number of open issues on GitHub, which may indicate a lack of contributors. So far, though, we have been able to build clean workarounds for every problem we have encountered.

Reasons why we chose handsontable:

  • Easily extensible
  • Two-way data binding: if the text inside a cell is changed, the data object gets updated right away. It works the other way too - if new data is pushed to the editor, the change is reflected in the UI after a re-render.
  • Good documentation and active development
  • Good API: hot supports a ton of functions you can call and, in addition, defines a lot of hooks (events), which makes customizing the editor much easier. For example, it fires events before and after data is changed inside a cell, and also on failed validations.
  • It is fast: except for the scrolling performance, which can be slow at times, editing, validation, importing and exporting work rapidly.
  • Small overhead for project files: the JSON data hot exports is pretty straightforward, with one array per table, which contains the rows represented by objects. This made it very easy to adjust our SurveyLab importer to read those files.
  • Custom cell types: hot comes with a lot of predefined cell types, such as dropdowns, numeric, text-only, etc.
  • Cell validators: hot allows us to write custom cell validators. That was an important factor in the decision, as we needed to implement a foreign-key check that looks for the existence of a key in other tables.
  • Context menu: hot has a built-in right-click context menu, which can be extended and customized with your own functionality.
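As a sketch of how the foreign-key check can be built on hot's custom validators (which receive the cell value and a callback): the `makeForeignKeyValidator` helper and the field names below are our own illustration, not part of the hot API.

```javascript
// Hypothetical foreign-key validator factory: a cell is valid only
// if its value occurs as a key in another table's rows.
function makeForeignKeyValidator(targetRows, keyField) {
  // Collect every key present in the referenced table.
  const keys = new Set(targetRows.map(row => String(row[keyField])));
  // hot custom validators receive (value, callback) and report
  // validity by calling the callback with true or false.
  return function (value, callback) {
    callback(keys.has(String(value)));
  };
}

// Usage sketch in a column definition (assumed data/column names):
// columns: [{ data: 'page_id', validator: makeForeignKeyValidator(pages, 'id') }]
```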

So far we are fairly happy with the decision to move to an online spreadsheet editor. By far the biggest improvement is the ability to use git for our project templates: having switched from Excel files to JSON, we can merge changes without problems. Working with the editor is also more user-friendly, because we can highlight tables with failed validations or use hyperlinks to navigate directly from a foreign key to the corresponding table. In addition, the editor is future-proof, because it is written in JavaScript: if we need a new function or require a part of the UI to behave differently, we can now just build it.

Below you see an image of the current version of the editor. Red buttons indicate sheets with failed validations.





Published in CS Science
The Open Access Movement

If state-of-the-art scientific knowledge were shared not only within a restricted community but made available to everybody, the idea of free education for all could reach an entirely new level. Before findings were shared digitally online, most scientific results circulated only in academic spheres, mainly in printed journals. Unfortunately, accessing a single journal article can be quite a costly endeavour and thus cannot be afforded by everybody. Until recently, state-of-the-art scientific knowledge was therefore largely limited to a select community whose universities held paid licenses to online journals. Although many research projects are funded through national taxation systems, their findings are mainly published in licensed academic journals, leaving the majority of individuals in developed and developing countries without access. This circumstance, together with the worldwide spread of internet access in the late 20th and early 21st century, accelerated the so-called Open Access movement.

What is Open Access (OA)?

The Open Access (OA) movement defines Open Access as unrestricted online access to peer-reviewed scientific and scholarly research (Schöpfel & Prost, 2013). Not only scientific papers published in journals are now OA; a growing number of book chapters, monographs and theses have followed suit (Schöpfel & Prost, 2013). Two declarations represent milestones in the history of OA and in enabling easier access to scientific findings for a wider community of researchers, academics and libraries: the Berlin Declaration and the Budapest Declaration of Open Access. By Open Access Week 2013, the Berlin Declaration had been signed by 451 organizations.


Development of the Open Access Movement

The debate surrounding the effect of OA started with Steve Lawrence’s publication "Free online availability substantially increases a paper's impact" in Nature (Lawrence, 2001) and has been controversially discussed ever since. Lawrence (2001) emphasized that OA could facilitate connections among research groups and scientists, expand scientific networks, and maximize the visibility of scientific findings through web engine indexing. This could create a significant contribution to society, minimize redundancy and speed up the scientific process, rendering it even more transparent to the public (Lawrence, 2001).
A growing number of voluntary organizations, such as the Right to Research Coalition, and research entities, such as the Fraunhofer Institute, support Lawrence’s (2001) idea that an OA article is more likely to be read and cited than an article with restricted online access.
Historically, the worldwide spread of public Internet access in the 1990s and early 2000s fueled the OA movement, as the two methods of free online retrieval, Green and Gold OA, enabled easier access to state-of-the-art research.

The two routes of Open Access: Green OA and Gold OA

As stated above, Open Access to scientific findings can be provided through two specific routes: the Green Open Access route and the Gold Open Access route. Green OA stands for the procedure in which the author publishes in any suitable journal and then self-archives the published article for free, either on their personal website or the affiliated institution's website, or shares it with the respective scientific community via online platforms such as ResearchGate and PubMed Central. The Golden route represents OA's gold standard, with authors publishing in peer-reviewed Open Access journals. Hybrid OA journals, by contrast, provide Gold Open Access only for articles whose authors have paid a publishing fee. Direct Gold Open Access publishing grew rapidly between 2000 and 2009: an estimated 19,500 articles were published OA in 2000, increasing to 191,850 articles in 2009 (Björk et al., 2010). These figures underline how strongly OA journals have grown in number.


Advantages & disadvantages of Open Access

Accessing scientific results online represents a major advantage over paid online and print journals, as OA enables a worldwide audience to receive and immediately assess state-of-the-art knowledge of the respective discipline. This facilitates the dispersion of scientific knowledge not only among academics of the respective field but also among the non-academic population; knowledge and scientific findings become available to everybody. Moreover, companies and professionals can keep themselves up to date on the newest expertise in their field.
Nevertheless, despite these advantages, OA also has several downsides. Whereas paid access merely restricts the readership, articles published in OA journals may be subject to a selection bias among the submitting entities: as the costs of submission have to be covered by the publishing researchers or their funding entities, only those able to pay the publishing fee may successfully submit their research. Moreover, current OA journals mostly have lower impact factors, making publishing in them somewhat less attractive for researchers who depend on high-impact publications.


The idea of OA to scientific knowledge is not absolutely new, but it is revolutionary in itself, as most academic research has mainly circulated solely within the academic world. The OA movement can potentially change this and may even fuel scientific development; chances are that more journals will share their articles freely online in the future. The main question remains how OA journals will resolve the obstacle of the publication fee for publishing entities and succeed in becoming high-impact journals.


Björk, B. C., Welling, P., Laakso, M., Majlender, P., Hedlund, T., & Guðnason, G. N. (2010). Open Access to the scientific journal literature: Situation 2009. PLoS ONE, 5. doi:10.1371/journal.pone.0011273

Lawrence, S. (2001). Free online availability substantially increases a paper's impact. Nature, 411(6837), 521.

Schöpfel, J., & Prost, H. (2013). Degrees of secrecy in an open environment: The case of electronic theses and dissertations. ESSACHESS - Journal for Communication Studies, 6. ISSN 1775-352X.



Published in CS News
We are fired up. Our 2014 and the future.

CS is content

2014 was a very gratifying year for us. With 14 successfully completed projects and the implementation of a battery of more than a dozen career-counseling tests, we were once again able to increase our annual turnover.

CS delivers quality

Far more importantly: thanks to our scientific background, our drive to get to the bottom of customer needs, and our will to keep pace with the development of technology, we were able to raise our service quality substantially once more in 2014. The many positive responses from our customers confirm that we are on the right course.

CS is a company of the future.

We rely fully on a flat hierarchy, empowerment and the sharing of information. We offer maximum flexibility regarding both place and hours of work. We have practically abolished e-mail for our internal communication and rely on modern, flexible cloud technologies for our technical infrastructure as well as for internal tools such as CRM and task management. That is how work becomes fun!

This lean, modern form of organization, combined with our service quality, makes us the optimal partner for implementing online survey systems - and not only in Switzerland!

CS is motivated

Further interesting projects are already lined up for implementation in 2015, among them a large-scale longitudinal sociological study with around 30,000 participants in 30 companies from 9 EU countries. For this contract we can, in addition to the software development, once again draw on our Europe-wide network of students and scientists and offer translation of the questionnaires into 7 languages.

In parallel with our day-to-day business, we are also working busily on a graphical user interface for our framework SurveyLab, which will make entering questionnaires considerably easier in the near future. We will keep you posted.

We are fired up about what we do. Have a great 2015! :-)

Published in CS News


After introducing all the other team members, it is time to get to know Felix, frontend developer at cloud solutions. Felix is 30 years old and comes from Stuttgart, but lives in Berlin, as he loves the city's diversity.

How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

First contact was at the beginning of 2014 via stackoverflow careers, when I replied to a part-time job announcement from cloud solutions. Some days later, we had our first Skype interview, which was very good. We had a nice talk and a lot of fun. That was one main reason why I started working for cloud solutions, because I really like my colleagues. And the job allows me to use recent best practices and to try out new tools and techniques. In addition, the part-time job helps me to stay flexible and allows me to work on other projects as well.

What was the most interesting project you have worked on in your time with cloud solutions?

The most interesting project I have been working on is the (currently ongoing) development of the SurveyLab web service, as this is a huge project in which I can really learn a lot: from planning and prototyping to deciding which techniques and frameworks to use, or how the database connection to the current SurveyLab project works. But this also means that things have to be rethought more than once, and sometimes you have to start all over again - it never gets boring.

Why has 2014 been such a successful year for you?

2014 has been such a successful year for me because I finished my Bachelor's degree in computer science, started being self-employed, started working for cloud solutions and moved to Berlin.

Were there many new challenges you faced in your working life this past year?

The Bootstrap 3 refactoring was definitely underestimated in time and complexity. Additionally, one of my goals has been to learn to write tests in JavaScript, which has proven to be quite demanding.

What projects are you engaged in during your leisure time?

I am developing a responsive, JavaScript-only lightbox, and I write blog posts from time to time.

What are your personal goals for the year 2015?

The usual: do more sports and stop smoking (again :D).

Published in CS News


Tobias is mainly responsible for backend development at cloud solutions. He is 33 years old, comes from south Germany and currently lives in France. He joined cloud solutions in 2012 and has been engaged in many interesting projects so far.

How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

In December 2012 I bumped into a job vacancy at cloud solutions. At the time I was in a transition period, looking for a job that I could do from where I live in France and that would allow me to continue working a few hours per week for a non-profit organization. From the first Skype call after I applied until now, two years later, I have been very happy that I met the guys at cloud solutions. It has always been positive, and I can say I enjoy working for CS as a (backend) software developer.

What was the most interesting project you have worked on in your time with cloud solutions?

There were other interesting projects too, but the most relevant and interesting one was working on the online test platform project (OTP). This is mainly because it was the perfect opportunity, in my second year working for cloud solutions, to focus on projects based on our in-house framework SurveyLab. Being part of the software development team that implemented these OTP projects, I could learn a lot and become much more familiar with the software.

Why has 2014 been such a successful year for you?

My employment rate in 2014 was increased to almost twice what I worked in 2013. So the two main successes were working my second year for cloud solutions and working more hours. Since I consider those steps successful, 2014 was a successful year for me - especially since I was involved in interesting projects, as I previously mentioned, and since the mix between learning something new and contributing productively helped this happen smoothly.

Were there many new challenges you faced in your working life this past year?

Not more than already mentioned above. Definitely not in terms of difficulties, as from my point of view everything happened step by step, in a good flow.

What projects are you engaged in during your leisure time?

I do not really have any projects going on currently. I like to do sports, which I have not done regularly recently.

What are your personal goals for the year 2015?

One general goal is to have a good balance between the various aspects of everyday life.



Published in CS News


Sina is responsible for cloud solutions' marketing. She is 26 years old and currently lives in Münster, Germany with Matthias. She recently finished her master's degree in Psychology at the University of Münster.

How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

My first contact with cloud solutions was at the EFPSA Congress 2011 in Poland (EFPSA = European Federation of Psychology Students' Associations). It was my first EFPSA Congress, and besides being fascinated by all the enthusiastic and motivated psychology students, I heard people talking about cloud solutions, saying that they were EFPSA alumni and had built up a company creating online software. Therefore, I collected one of their business cards. It was almost two years later, in January 2013, that I learned they were looking for a German-speaking psychology student to intern with cloud solutions. I was interested, applied, talked to Markus about the tasks I could fulfill for the team and was happy to hear that I could be working with them in February 2013. After finishing my internship I started working for cloud solutions part-time, which I have been doing throughout my studies and will continue to do in the future.

What was the most interesting project you have worked on in your time with cloud solutions?

So far, there have been many projects I liked. But I really enjoyed working on the project called “Digitalisierung Schweizer Wahldaten”. I particularly liked selecting students to work on our project: as an organizational psychology major, we learned a lot about personnel selection in theory, so I really enjoyed using this knowledge in practice to select adequate candidates. Moreover, I became responsible for the financial administration of cloud solutions this year and have really enjoyed it so far.

Why has 2014 been such a successful year for you?

I definitely have to admit that 2014 was a very successful year for me. I managed to participate in the National Model United Nations in New York, a simulation of the United Nations, travelled a lot, spent a wonderful six-week internship with many other highly motivated psychology students in Cambridge in the UK this summer, and managed to finish my master's studies in psychology, as I recently handed in my thesis.

Were there many new challenges you faced in your working life this year?

Apart from working on the aforementioned “Digitalisierungsprojekt” and learning some new skills, I learned a lot about finances this year. But the biggest challenge for me was finishing my master's studies and handing in my master's thesis, which was a lot of hard work. I am very excited about the challenges the next year has to offer.

What projects are you engaged in during your leisure time?

As I consider it very important to gain knowledge in different professional fields, I have engaged in a lot of voluntary work during my studies. This year, I prepared with a group of students for the National Model United Nations in New York to represent the interests of Macedonia during this simulation at the United Nations headquarters. Apart from that, I joined many other psychology students from all over Europe in the aforementioned six-week internship programme, the Junior Researchers Programme (JRP), in Cambridge this summer to create and publish guidelines and an economic and legal framework for international medical travel. This was a very intense and incredible academic and personal experience as well. But my best day this year was when I finally handed in my master's thesis after a year of work and thus finished my studies in psychology. That was a very exciting day!

What are your personal goals for the next year?

During my studies, I have always thought in terms of personal goals, because there were many things that I wanted to do and experience. I think next year will be different, so my first goal is to enjoy life as it comes and live every day to the fullest. Other than that, I am considering starting a PhD in psychology or economics and travelling as much as possible. But the most important goal and wish is for me, my family and my friends to stay healthy in 2015.

Published in CS News


After getting to know Markus and Adrian, meet the third owner of cloud solutions, Sven Gross. Sven has recently finished his PhD in Organizational Health Psychology, doing justice to his position as Science Geek at cloud solutions. He lives in Bern, Switzerland with Miriam. In September, his first son was born. Since then, he has been coordinating his family life, cloud solutions and his position as project manager for the SBB.

How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

During my PhD time at the University of Bern, I was involved in the scientific backing of the S-Tool project. This project was the kick-start for Markus to found mh cloud solutions. In the next couple of years, I worked now and then as a freelancer to do statistical analyses for Markus' customers. In 2011, I joined cloud solutions as a partner.

What was the most interesting project you have worked on in your time with cloud solutions?

The evaluation of the UNV program for cinfo.

Why has 2014 been such a successful year for you?

My son Juri was born healthy and happy in September of this year. To become a father is a miracle.

Were there many new challenges you faced in your working life this year?

No, not this year. At the moment the biggest challenge for me is to combine my new role as a father with my working roles as a partner of cloud solutions and as a project manager at the SBB.

What projects are you engaged in during your leisure time?

For a year, we have been proud tenants of an allotment garden. For me gardening is a very relaxing and inspiring activity. Besides gardening, I am looking forward to nice spring weather, so that I can test my new racing cycle.

What are your personal goals for the next year?

I do not set personal goals for a year. I will be happy if my family and I are healthy and in good spirits in 2015.



Published in CS News


After we introduced cloud solutions' founder Markus, today you will get to know more about Adrian Imfeld, web software engineer and partner at cloud solutions. Adrian originally comes from Zürich in Switzerland but has moved to Valencia in Spain where he enjoys the warmer weather and is able to pursue his passion for flamenco music.

How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

I met Markus at a psychology students' conference when we were both still studying psychology. He was giving a talk on a web platform for stress measurement he was developing for Gesundheitsförderung Schweiz (GFCH). Being both nerds, we immediately geeked out about software development, and I said I’d be interested in working with him on future projects. Sure enough, he soon contacted me about working on a new web project dealing with team psychology. I quickly got up to speed with the necessary technology and tools, and that's how I became a part of cloud solutions.

What was the most interesting project you have worked on in your time with cloud solutions?

It's hard to pick a single favorite among so many interesting projects. For me, the most interesting task at cloud solutions is the design and development of our in-house survey framework called SurveyLab. I love dealing with complex problems and I'm passionate about coding. Our company provides a rare opportunity of combining programming skills and psychological methodology.
If I had to pick a favorite, I would choose our current online test platform project (OTP) with the “Schweizerisches Dienstleistungszentrum Berufsbildung (SDBB)”. We are implementing a battery of psychological tests for career counseling which are connected to a web-platform developed by Netcetera, a big player in the Swiss IT market. We had a great opportunity of learning a lot about cross-platform communication, interface design, and project management.

Why has 2014 been such a successful year for you?

The year 2014 was dominated by the mentioned online test platform project which allowed us to employ our frontend developer Felix and double the employment rate of Tobias, our backend developer. I'm now leading our small but effective software development team which is a new and fun challenge. I feel we are getting better with every project and we have a good working atmosphere. Which is of course easy for me to say because it is me doing the code reviews and leaving comments about code details, not them ;).

Were there many new challenges you faced in your working life this year?

There were quite a few new challenges this year. As I mentioned, Markus and I are now leading a small developer team, which is a new role for both of us. We were also involved in a large IT project (the online test platform), which included collaborating with a big IT partner. There were new technical challenges, e.g. software interface design, server-side PDF rendering and mailing service integration. I think we did quite well and mastered them one by one.

What projects are you engaged in during your leisure time?

Besides being a computer scientist, neuropsychologist, entrepreneur, mediocre philosopher and scientific skeptic, I am also a passionate flamenco guitar player. After playing accompaniment in dance classes at the University of Valencia, I started a new project together with a local flamenco singer. Besides playing modern flamenco, we are doing flamenco archeology, digging up dusty flamenco records and playing small gigs. Being a foreigner in Valencia, I obviously need to prove we’re doing the real, ancient, purist flamenco, not the modern hipster Gipsy Kings rumba :).

What are your personal goals for the next year?

I don’t think much in terms of personal goals. Basically, I deal with what comes along, trying to make the best out of it. Life is complicated and chaotic, we never really know what is going to happen. Maybe I’ll move to Seville next year and start part-time flamenco studies at an academy. Then again, I said this last year, too.
Once I met a guy from Copenhagen in Barcelona who told me I should just go with the flow. Sounds shallow, but it’s not such bad advice if you know how to pick a good flow. That is what I did when I became partner at cloud solutions and moved to Spain. No regrets so far!

Published in CS News


As we promised a week ago, we're going to introduce everybody at cloud solutions before the end of the year. We asked everybody the same couple of questions and nobody edited the answers - they're fresh off the press. We start with Markus because he started us. Markus is Swiss, 38 years old and lives in Tallinn, Estonia with Anneli. Together they have a daughter and are expecting a son in January.


How and when was your first contact with cloud solutions? How and why did you start working for cloud solutions?

My first contact with the seed of cloud solutions must have happened when my father bought me an IBM PC when I was 15. Later my uncle, also an IT company founder, invested his trust in me and gave me the projects I needed to develop as a web programmer. In 2006 I started working on a project that would lead to the founding of mh cloud solutions in 2009. When Sven and Adrian joined as partners in 2011, the name was changed to cloud solutions.

What was the most interesting project you have worked on in your time with cloud solutions?

That’s a question with no politically correct answer for a company owner :-). But I must say we have chosen our niche in such a way that the average level of interesting-ness of our projects is really high. With almost every project we can implement a unique and often highly innovative survey platform and that makes our work generally very interesting!

Why has 2014 been such a successful year for you?

I’d like to include the last two years in my summary. In that time we grew from 3 people to 6 at cloud solutions and were able to double our annual turnover in both years. In 2013 we landed our biggest deal so far, and in 2014 we implemented it successfully. At the same time we managed to keep developing our own in-house framework without having to depend on VC funding or the like. At the end of 2014 I can wholeheartedly say that we’re really what we want to be as CS: cool and friendly people in a modern and innovative company producing high-quality survey solutions.

Were there many new challenges you faced in your working life this year?

Even with a micro firm like ours, growth changes a lot about how you do things and what needs to be done. In that sense there are always new challenges as we grow. My role in CS has changed quite a lot, from software developer to software project manager, an area where I can still learn plenty. And being the father of a small child while working in a home office is an ongoing challenge not to be underestimated.

What projects are you engaged in during your leisure time?

Apart from having kids, a very challenging and rewarding “project”, I spend most of my non-CS time on the craft brewery I founded together with three Estonian guys. We have been developing beer recipes for the last two years, and starting from January 2015 our beer will be available in bars and shops: first in Estonia and soon, hopefully, also in Switzerland.

What are your personal goals for the next year?

Produce great software, brew great beer, be a good family man and most importantly, enjoy life.

Published in CS News
Who we are at cloud solutions

The first cold days have arrived, snow is falling, and in some CS countries Christmas markets spoil us with hot wine, pastries and Christmas lights. It is the time of year when most people are busy finalising their work projects and preparing for the upcoming holidays, to enjoy the last days of the year. It is also the time when many of us start to think about what we have experienced and achieved in the past year.

Throughout the year, we at cloud solutions have shared with you what projects we have been working on, what we have learned and how we have grown in numbers and wisdom.

We haven’t, however, shared much about who we are, what we are passionate about and what we do besides making great software. We want you to get to know us a little better this December. This week we start with Markus Hausammann, who founded our innovative virtual company 5 years ago.

We hope you stay as curious as we are and have a wonderful Advent season.

Introduced so far: Markus, Adrian, Sven and Sina

Forms of application development: what are native, hybrid and web apps?

Many a software start-up or project team has found itself in a situation like the following: you have a great idea for an application, the investors are convinced, the team is ready and the offices are set up. In principle, implementation could start right away. But before the first lines of code can be written, you have to decide on a form of implementation. Not an easy choice. Should the application be built as a responsive web application, as is done more and more often, or are the special capabilities of mobile devices central to the problem at hand?

This article gives a short introduction to the three main variants of app development.

1. Web application

The application is hosted centrally on a server, like a website, and all users access it there. The browser serves as the runtime environment and as the interface to the operating system. The developer can therefore assume that the application works on different operating systems without special adaptations. Examples of web applications are Google Docs, Toggl and Facebook.


Advantages:

  • Web applications are cross-platform: any device with a web browser can access them, which means far less development effort compared to native applications.
  • Application updates don't have to be installed but are available to all users immediately.
  • No commission has to be paid to app stores on sales.
  • No limitations on design or functionality from strict app store guidelines (notably Apple's and Microsoft's).


Disadvantages:

  • A permanent internet connection is required.
  • For security reasons there is little or no access to hardware and operating system features (camera, microphone, storage management, ...).
  • Longer response times than native applications, noticeable above all with touch input.
  • Worse performance compared to native applications.
  • No sales through app stores.
  • Browser compatibility can be problematic, especially when older versions of Internet Explorer or various mobile browsers have to be supported.

2. Hybrid application

A hybrid application combines advantages of the web app and the native application. For hybrid applications, a native shell (a “wrapper”) is created that serves as the interface to the operating system and provides access to the device's hardware and to operating system features, for example the accelerometer, GPS or storage management. This shell is written in a language the target platform understands. A web page consisting of HTML, CSS and JavaScript is then placed inside the shell. Put simply, the shell is a browser with all navigation elements and menus hidden.

Hybrid applications can additionally be divided into an offline and an online variant. In the offline version, the web page is stored directly on the device. In the online version, an internet connection is required when opening the app in order to fetch the web page.

The hybrid form is used more for developing mobile applications than for implementing classic desktop applications, although Windows 8 already makes hybrid desktop applications possible.

Frameworks that help the developer provide a hybrid application for different operating systems include Titanium and PhoneGap.


Advantages:

  • Sales through app stores are possible.
  • They look like native applications, since menus and the browser's URL bar are gone.
  • Access to device- and OS-specific features is possible.
  • Unlike web applications, they can also be used offline, as long as the application doesn't require a permanent data connection.
  • Less development effort than for native applications.
  • Updates apply to all users immediately, as soon as their device goes online.


Disadvantages:

  • Worse performance compared to native applications.
  • Longer response times than native applications, especially with touch input.
  • Browser compatibility can be problematic.
  • For mobile applications, sales are usually only possible through app stores, which also means being bound by store guidelines.
  • Bug fixing can be problematic when no framework is used, since wrappers sometimes behave differently from “normal” browsers.
  • App stores tend to be sceptical of wrappers.



3. Native application

This is the “noblest” form of implementation. The application is tailored to and built specifically for the respective target platform (hence “native”). This allows the application to directly access features and special software of the operating system, for example the camera, storage management or the fingerprint sensor. Native applications are also potentially very fast and are particularly suited to computationally intensive applications such as 3D games or research workloads.

This requires, however, that the application is written in a language the operating system understands (a “native” language). For iOS, which runs on iPhones and iPads, that would be Objective-C, for example, and for Android devices Java. For Windows 8, apps can even be implemented in several languages: in C++, in .NET languages, and even with HTML, CSS and JavaScript.

Implementing a native application is therefore often the most laborious option, since many functions that the browser provides for websites (memory management, for example) have to be written by hand.


Advantages:

  • Very fast.
  • Sales through app stores are possible.
  • Access to all device- and OS-specific features.


Disadvantages:

  • High development effort, especially for multi-platform implementations.
  • For mobile applications, sales are usually only possible through app stores, which also means being bound by store guidelines.
  • Updates are laborious.

It quickly becomes clear that there is no single best form of implementation; the choice depends heavily on the application at hand. If you're planning a simple to-do list for mobile devices that should also work offline, a hybrid implementation makes sense. If you want to build a game that depends on fast reactions or is graphically demanding, you should develop natively. And for a social media application with a large user base and lots of collaboration features, it probably makes sense to build both a web application and a hybrid application, to give users the best possible experience on the go as well.


Published in CS Tech
Security and speed with CloudFlare & Co

Security and speed: two of the most important properties of any website, and two highly specialised fields. To be able to offer the state of the art at all times, we have been using the CloudFlare service for a while now. CloudFlare lets us run websites more securely and faster without having to provide the necessary infrastructure ourselves. There is a whole range of similar services, such as Incapsula, Myracloud, MaxCDN, CloudFront and quite a few more.

Fundamentally, such services operate between a hosted website and the end user who visits it. At this point the services can offer various features, which in terms of focus all fall into the two categories of security and speed.

Below, some of these features are described in more detail. The descriptions refer primarily to CloudFlare as an example, but the overlap with similar providers is large.

Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a network of servers that delivers content to the end user in an optimised way. Using the globally distributed servers of a CDN, a website is delivered to its visitors as fast as possible, i.e. over the shortest path, which can reduce response times enormously.

Figure: Illustration of a CDN

The CDN's servers keep the website's static assets (such as JavaScript, CSS and images) on hand and transfer them directly from one of the servers to the visitor when the page is requested. Depending on where the end user is located geographically, the nearest CDN server delivers the data.

Dynamic content continues to be served directly by the actual origin server, while all static content reaches the user through the CDN. According to CloudFlare, this makes websites load twice as fast for users on average.
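The "nearest server wins" idea behind a CDN can be sketched in a few lines of Python. This is only a toy illustration, not how CloudFlare actually routes requests (real CDNs typically rely on anycast and DNS rather than explicit distance lookups), and the points of presence and coordinates below are made up for the example:

```python
import math

# Hypothetical CDN points of presence with rough (latitude, longitude) values.
POPS = {
    "Frankfurt": (50.1, 8.7),
    "London": (51.5, -0.1),
    "New York": (40.7, -74.0),
    "Singapore": (1.4, 103.8),
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_pop(user_location):
    """Pick the point of presence closest to the user."""
    return min(POPS, key=lambda name: distance_km(POPS[name], user_location))

# A visitor in Zurich (roughly 47.4 N, 8.5 E) would be served from Frankfurt.
print(nearest_pop((47.4, 8.5)))
```

The shorter the path, the fewer network hops and the lower the latency; that is the whole speed argument for static assets.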

Figure: The data centres of the CloudFlare CDN

Web Content Optimization (WCO)

As just described, the advantages of a CDN come from bringing a website closer to its end users by means of infrastructure. Web Content Optimization (WCO), on the other hand, is concerned not with how the data is delivered but with optimising the data itself. With their different approaches, CDN and WCO both lead to a faster website, so the two complement each other.

Web Content Optimization is achieved by measures such as the following:

  • Bundling JavaScript files: Multiple JavaScript files are bundled automatically so that all of them are transferred within a single request. This saves the overhead of the multiple requests that would otherwise be needed to transfer each file separately.
  • Asynchronous loading: By loading resources such as CSS or JavaScript files asynchronously, a page effectively loads faster and is not needlessly held up by, say, the synchronous loading of a large script.
  • Compression: The data to be transferred is also compressed. With a compression rate of, say, 30%, transferring that data is correspondingly about 30% faster, simply because there is less data to send.
  • Cache headers: Cache header settings are optimised automatically so that the visitor's browser cache is put to good use and unnecessary repeat requests are avoided.
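The compression point above is easy to demonstrate with Python's standard library. This is just an illustration using gzip, not CloudFlare's actual pipeline; real servers negotiate gzip (or similar encodings) with the browser via HTTP headers, but the size effect is the same:

```python
import gzip

# Repetitive markup compresses very well -- typical HTML, CSS and JS do too.
page = ("<div class='row'><span>item</span></div>\n" * 200).encode("utf-8")

compressed = gzip.compress(page)
ratio = len(compressed) / len(page)

print(f"original: {len(page)} bytes, compressed: {len(compressed)} bytes")
print(f"compressed size is {ratio:.0%} of the original")
```

Fewer bytes on the wire translate directly into shorter transfer times, which is the entire rationale for compressing text assets before delivery.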

Security

As mentioned at the outset, a service like CloudFlare operates between the hosted website and its visitors. Besides the speed optimisations discussed above, effective security measures can also be applied at this point to better protect websites against threats on the net. They are briefly introduced below:

  • Protection against DoS attacks: This is where protection against denial-of-service (DoS) attacks takes place. When the infrastructure detects such an attack, the appropriate countermeasures kick in and the attack never reaches the web server of the underlying website.
  • Web Application Firewall (WAF): A web application firewall can fend off further threats to a website. Automatic protection is available against typical attacks such as:
    • SQL injection
    • Comment spam
    • Cross-site scripting (XSS)
    • Cross-site request forgery (CSRF)


Figure: CloudFlare's site analysis with information on detected threats

Fundamentally, every website is exposed to these potential threats on the internet. By using CloudFlare or a comparable service, many threats can be fended off before they ever reach the actual website.

Individual websites additionally benefit from the fact that these security services are also in use on a large number of other websites. On that basis, threat defence is not limited to one individual website but can cover all of them. If, for example, an attack on one particular website is detected, the attacker can automatically be blocked from all of the websites.


So far we are very happy with CloudFlare, and it has proven itself well in practice. Our own servers get to rest more often, and we benefit from pooled knowledge and shared high-performance infrastructure. Like many other cloud services, though, using CloudFlare & Co brings new problems with it: an outage at CloudFlare itself can have far-reaching consequences for the availability of thousands of sites. It is therefore important to have a working fallback solution at all times and not to become 100% dependent.

All in all, we have drawn significant benefit from the advantages described, and this service complements our infrastructure superbly.

Published in CS Science
Handy data cleaning tool - CSV Fingerprints

Recently I stumbled upon a handy little tool that may be interesting for everyone working with tabular data. An important but often tedious task is cleaning your dataset before you can actually start running statistical analyses. During this cleaning or mastering process you may find artifacts like the following:

  • Entries with unexpected data types: test takers were expected to describe something in prose, but a few entered a number instead.
  • Empty cells where no missing values are allowed: perhaps a mistake made when entering paper-and-pencil data manually.
  • A sudden shift of cell values to the right, pushing many values into the wrong column: this happens when data separator characters appear in the data itself.

If you've ever worked with larger datasets, you surely know these or similar problems and how hard they can be to spot.
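Before any visual inspection, the three artifact classes above can also be caught with a quick programmatic sanity check. Here is a minimal sketch in plain Python; the toy dataset, the expected column count and the assumption that the "answer" column holds prose are all invented for the example:

```python
import csv
import io

# Toy dataset: "answer" should be prose, "age" numeric; the last row is shifted
# because an unescaped comma appears inside the data itself.
raw = """id,answer,age
1,liked the colours,34
2,7,29
3,,41
4,extra,comma broke,this,row
"""

expected_cols = 3
problems = []

for i, row in enumerate(csv.reader(io.StringIO(raw)), start=1):
    if i == 1:
        continue  # skip the header row
    if len(row) != expected_cols:
        problems.append((i, "wrong column count (shifted values?)"))
        continue
    if any(cell.strip() == "" for cell in row):
        problems.append((i, "empty cell"))
    if row[1].strip().isdigit():
        problems.append((i, "numeric value in a prose column"))

for line_no, msg in problems:
    print(f"row {line_no}: {msg}")
```

A check like this complements the visual fingerprint: the script tells you which rows are suspicious, while the fingerprint shows you the overall shape of the damage at a glance.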

CSV Fingerprints gives you a very quick first visual impression of your data and can therefore save you a lot of time. Victor Powell, the author of this handy tool, explains CSV Fingerprints in more detail on his blog. There is also a full-screen version of the tool available.

Tip: Don't copy & paste data directly from Excel; always copy the CSV from a text editor.
