Since I joined Alpha-Info, I have come across several useful and cutting-edge tools. The best part is that they are all open-source projects, which is great for a start-up company. I'm going to list these tools and briefly sum up where you can use them, although I don't yet have full knowledge of some of them.
1. Thrift
It's an amazing tool that facilitates communication between client and server. Take the project I'm working on as an example: our server code is implemented in C++, while the client side supports Android and iOS. Traditionally, we would have to deal with low-level sockets and HTTP packets, and different platforms like Android and iOS would each have to establish their own channels for sending requests. Thank god we have Thrift! All you have to do is write a file that defines the required classes, member functions, variables, and so on. Then you can generate the server- and client-side code simultaneously. Anything else? No, we're all good here.
Using the same example as above:
a. Define the API in a base.thrift file
b. Suppose we use C++ for the server code
thrift -r --gen cpp base.thrift
-> which generates .h & .cpp files
c. Java for the first client (Android)
thrift -r --gen java base.thrift
-> which generates .java files
d. Cocoa for the second client (iOS)
thrift -r --gen cocoa base.thrift
-> which generates .h & .m files
The generated files contain the basic implementation of the low-level communication code, so developers can just concentrate on the service itself.
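For a rough idea, here is a minimal sketch of what such a base.thrift definition could look like; the Item struct and ItemService service are made up for illustration:

// base.thrift (hypothetical example)
struct Item {
  1: i32 id,
  2: string title,
  3: string imageUrl
}

service ItemService {
  list<Item> getItems(1: i32 limit)
}

Running the three commands above on a file like this would generate an ItemService server skeleton plus C++, Java, and Cocoa clients respectively.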
2. Scrapy
Scrapy is a Python-based framework for web crawling. In our project, we use a crawler to fetch the images and titles of clothes from e-commerce websites.
The following roughly introduces the basic operations step by step:
a. Install the scrapy package with pip
pip install scrapy
b. Create a new Scrapy project
scrapy startproject <project_name>
c. Go into the nested folder named “spiders” and create your “spider.py” there
d. In your spider class in spider.py, set the “name” attribute
name = "example"
e. After you've finished implementing your spider.py code, type the command below in the terminal:
scrapy crawl example
Then your program will start crawling!
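To make the steps concrete, here is a minimal sketch of what spider.py could look like; the start URL and CSS selectors are made up and would have to match the real e-commerce pages:

# spiders/spider.py (hypothetical example)
import scrapy

class ClothesSpider(scrapy.Spider):
    name = "example"                                   # the name used by "scrapy crawl example"
    start_urls = ["https://shop.example.com/clothes"]  # assumed listing page

    def parse(self, response):
        # yield the title and image URL of each product block on the page
        for product in response.css("div.product"):
            yield {
                "title": product.css("h2::text").get(),
                "image": product.css("img::attr(src)").get(),
            }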
3. MongoDB
MongoDB is a NoSQL database, which offers a more flexible schema and easier scaling than a traditional SQL database.
The only thing I've done with MongoDB so far is storing the data fetched by Scrapy into it, so I still need to put a lot more effort into it.
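For what it's worth, that storing step can be as simple as the following sketch with the pymongo client; the database and collection names are placeholders:

# hypothetical example using pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["crawler"]["clothes"]

# each crawled item becomes one document; no fixed schema is required
item = {"title": "Example shirt", "image": "https://shop.example.com/shirt.jpg"}
collection.insert_one(item)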
4. Docker
To the best of my limited knowledge, with Docker we don't need to install virtual machines anymore. In the past, if we wanted to run two services (let's say MongoDB and a Redis server), we might have needed to install two VMs with two guest OSes on our physical machine (on top of the host OS). Docker is a tool that packs each application into its own “container”, which runs directly on the host kernel instead of inside a full guest OS. What's more, for cross-platform development we don't need to rebuild the environment on each machine: all we have to do is install Docker, pass the built image from machine to machine, and every machine is equipped with the environment we want.
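For example, spinning up the two services mentioned above could look roughly like this (the container names are arbitrary):

docker pull mongo                 # fetch the official MongoDB image
docker run -d --name db mongo     # run it as a background container
docker pull redis
docker run -d --name cache redis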
5. CMake
CMake lets you describe, in a single file, the dependencies, static libraries (.lib, .a), and dynamic libraries (.so or .dll) that your source code requires. Then you just run cmake on the project containing that CMakeLists.txt, and the tool generates a Makefile that links everything for you (you can even choose the kind of project you'd like to generate, an Xcode project for instance). Finally, run make on the Makefile to build the executable binaries.
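As a rough sketch, a tiny CMakeLists.txt could look like this; the project name and source file are made up:

# CMakeLists.txt (hypothetical example)
cmake_minimum_required(VERSION 3.5)
project(demo)
add_executable(demo main.cpp)          # build an executable from main.cpp
target_link_libraries(demo pthread)    # link a required library

Then, in the project directory:
cmake .     # generates the Makefile (or add -G Xcode for an Xcode project)
make        # builds the demo binary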
6. Redis
As shown on the official website,
Redis is an open source, BSD licensed, advanced key-value cache and store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, sorted sets, bitmaps and hyperloglogs.
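To get a feel for the data-structure part, here is a small sketch using the Python redis client; the keys and values are made up:

# hypothetical example using redis-py
import redis

r = redis.Redis(host="localhost", port=6379)
r.set("page:home:hits", 0)              # a plain string key
r.incr("page:home:hits")                # atomic counter
r.lpush("recent:items", "shirt-123")    # a list structure
print(r.get("page:home:hits"))          # b'1'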
7. HomeBrew
HomeBrew helps Mac users install libraries and packages, just like yum or apt-get on Linux-based systems.
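For example, installing one of the tools above is a single command:

brew install cmake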
8. Actor
I don't know what it can do yet.
9. Redmine
It's a project management tool: you can assign tasks to members, track bugs or issues, list problems, and so on.
10. HipChat
A group chat tool that integrates with Git. Whenever a member pushes code to the Git repository, it shows up as a message in the chat window.