When formulating this response, I considered two articles in addition to my own thoughts and opinions:
The Future of Programming
Lifecycle Planning
The Future of Programming, by Robert Scoble, had me thinking before I even started to read or watch the video. I saw that the article was talking about programming in the cloud (using Cloud 9 as its example). Last year, I purchased a Chromebook and searched high and low for a cloud-based IDE that would let me execute and store code reliably. Cloud 9 was the platform I eventually landed on. Cloud 9 is not the greatest IDE, though. For example, hidden characters can get included in files and cause issues - in my case, hidden spacing characters in Python files caused terrible, relentless problems that I could only solve with vi. Which brings me to my next point: the terminal. Cloud 9 builds a lightweight virtual machine for each of its users. This has a multitude of benefits, including added security (one user cannot tamper with another), terminal access (great for developers, like me, who move around using the command line a lot), and structured file management (as is inherent with a full virtual machine).

I only used Cloud 9 for Python development, but, from the video, it is evident that Cloud 9 is a multi-purpose IDE. I could develop a full-fledged project with a group of collaborators because of the cloud-based nature of the program. The cloud-based environment allows for live collaboration, which makes version control much less of an issue: merge conflicts are far less likely when all the coding happens in one shared Cloud 9 workspace. Additionally, Cloud 9 is not slow (unless you have not logged in for a few weeks and they have to reboot your virtual machine). I was able to run pretty heavy-duty genetic algorithms on the cloud with Cloud 9 when I was in my Data Mining class. All things considered, programming in the future is not going to require all these base installs. We're slowly moving to a cloud-based world, and cloud-based solutions are going to be the future.
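The hidden-character headaches I mentioned could have been caught with a short script. Here is a minimal sketch (the character list and function name are my own invention; real linters cover far more cases) that reports where invisible characters like non-breaking spaces are hiding in a Python file:

```python
# Characters that commonly sneak into copy-pasted code and break
# Python's indentation rules while looking like ordinary whitespace.
SUSPECTS = {
    "\u00a0": "NO-BREAK SPACE",
    "\u200b": "ZERO WIDTH SPACE",
    "\u2028": "LINE SEPARATOR",
    "\ufeff": "BYTE ORDER MARK",
}

def find_hidden_chars(text):
    """Return (line, column, name) for every suspicious character in text."""
    hits = []
    line, col = 1, 1
    for ch in text:  # walk character by character, tracking position
        if ch in SUSPECTS:
            hits.append((line, col, SUSPECTS[ch]))
        if ch == "\n":
            line, col = line + 1, 1
        else:
            col += 1
    return hits
```

Running `find_hidden_chars()` over a misbehaving file's contents points straight at the offending line and column - no vi session required.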
As a user, do you want to navigate to a site, download a program, install it, then have the ability to run it? Or, as a user, would you rather just navigate to a site? The answer is obvious.
When considering Software Engineering, it is always important to consider your lifecycle model. Almost everyone involved in any form of Software Engineering is familiar with the waterfall model. The waterfall model, essentially, flows like this:
Software Concept <-> Requirements Analysis <-> Architectural Design <-> Detailed Design <-> Coding and Debugging <-> System Testing
My biggest problem with this model is that it is impossible to have a complete idea of the Software Concept at the beginning of the project. More importantly, Requirements Analysis is tedious to break down into all of its atomic elements up front. Software developers do not know the specific details of each feature from the very beginning. It is a continuous process of discovering new things about the requirements, whether from limitations you find out later, a client who did not specify everything correctly, and so on. While the waterfall model is bidirectional, climbing back up the waterfall is always described as 'being difficult' (going left in the way I detailed the model). There is later mention of an overlapping waterfall model, but, really, that is just an attempt to borrow the good parts of the spiral model and graft them onto the waterfall model: you can go back phases and elicit more requirements/functions/tests/etc., all without ruining your model (more on the spiral later).
Next, I will address the Code-and-Fix model. This model is pretty funny to me because it really is the model programmers use when they first start coding. You do not really know the structure of your application, and you just build in new features without good documentation or test cases. It really is the trial-and-error, 'hope I remember' form of coding - and a bad one, because you never remember everything. Unfortunately, there are a few pieces of my big project right now where I have adopted this approach, but those pieces are slowly undergoing major change in order to meet documentation and requirements analysis standards.
Next, the spiral model, I feel, is the most popular and efficient approach to software development. This is because, as I've previously mentioned, you will never be able to elicit all your requirements at the beginning. A continually developed set of requirements is the way to go about a project. That way, new features can be added as needed, and tests can be developed alongside the software as new cases or exceptions are thought up. All in all, starting on an extremely small scale and steadily growing is how most projects tend to unfold, for these very reasons.
The evolutionary prototyping model is interesting because it really means you are having trouble with the requirements. I feel this model is good for showing the customer what their software will look like in the end, without all the tedious backend development, because the customer may simply not know what they actually want. The process runs into issues with time management, though, because a lot of time can get wasted pretty quickly on designs that the customer rejects.
The Staged Delivery and Design-to-Schedule models are extremely similar, so I will consider them together. The heart of each of these models seems to be rooted in the same place as Agile development. With deliveries staged throughout the design process, the developers never fall far behind, and, if a customer does have a problem with a requirement, it can be fixed earlier rather than later. This is the preferred method of software development and software engineering, I feel. Conducting specific tests on specific pieces of software built in iterations is the best way to do it: you can set up small test cases, build features that satisfy only those tests, and do so rather quickly because the problem has been broken down into almost atomic pieces.
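The 'small test cases first, then features that fit them' idea can be sketched with Python's built-in unittest module. The tiny config-parsing feature and the test names here are invented for illustration - a sketch of the workflow, not anyone's actual project:

```python
import unittest

# One small, staged feature: parse a "KEY=VALUE" config line.
def parse_line(line):
    """Split a single KEY=VALUE line into a (key, value) pair."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

class ParseLineTest(unittest.TestCase):
    # Each test pins down one tiny requirement before the next
    # feature increment gets built on top of it.
    def test_basic_pair(self):
        self.assertEqual(parse_line("name=cloud9"), ("name", "cloud9"))

    def test_whitespace_is_trimmed(self):
        self.assertEqual(parse_line(" port = 8080 "), ("port", "8080"))

if __name__ == "__main__":
    unittest.main()
```

Each iteration adds one more test and just enough code to pass it, which is exactly the atomic breakdown described above.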
Considering everything, the type of software development and engineering cycle you use will depend on the type of project you are conducting, but there are reasons to use and not to use each of the models. What does this mean for the future of programming, though? With cloud resources becoming easier and easier to consume, could the software development and engineering cycle become the same way? Yes, yes it could. The mere fact that you can collaborate with someone on a single file from the opposite side of the world, at the same time, without running into version control conflicts is amazing. If I'm a customer, then I can test a product's features (provided I have a slight amount of domain expertise) without having to be in a physical meeting with a software team. Meetings can also take place over the cloud and could be even more efficient than in the past. The future is now and we should all embrace it.
Music listened to while blogging: Mysonne & Tech N9ne