The first part asks which classifier is best, and why. I am using their data to reproduce their results and then pursuing a specific area.
The second part will bring in the full body of research on transmembrane helix-helix interaction.
The second step is bringing in different features of my choice, or creating a new feature based on the features they evaluated.
1. Setting up the computing environment is what I have been doing.
2. The program runs; specifically, I am retrieving the Def1 and Def2 output results.
3. To run their code I have been using a Linux system; Perl is the scripting language, and a Perl package is used as well to produce the Random Forest results.
4. Random Forest (RF) is the tool they used to help create the input/output from the data in the pipeline they built. (Knowing I can rewrite the Random Forest script is necessary; a sketch of that rewrite follows this list.)
5. This paper is fairly generic; I am not gathering extra data, only using their data to reproduce the results.
6. Figuring out Def1 and Def2 and their input files; after running them, this is what I receive for their output files ([login to view URL] shows the commands).
7. Throughout this period I ran everything under nohup, which is how I saved my major error messages to a file (by default nohup collects the output, errors included, in nohup.out).
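
Since rewriting the Random Forest script is part of the plan (step 4), here is a minimal sketch of what that rewrite might look like in Python with scikit-learn. The file name features.csv and the column name "label" are my assumptions, stand-ins for whatever Def1/Def2 actually emit.

    # Minimal sketch: the Random Forest step rewritten in Python (scikit-learn).
    # features.csv and the "label" column are hypothetical stand-ins for the
    # real pipeline output (one row per example, one column per feature).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    data = pd.read_csv("features.csv")
    X = data.drop(columns=["label"])   # feature columns
    y = data["label"]                  # class labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, rf.predict(X_test)))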
The second step, after reworking the code, is to run the data through our own classifiers in R or Python, and evaluate them to make a prediction.
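
To answer the first part's question (which classifier is best, and why), the evaluation could score several candidates under cross-validation. Another sketch, under the same assumed features.csv layout as above:

    # Sketch: comparing candidate classifiers by 5-fold cross-validation.
    # The best mean score suggests which classifier to carry forward.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    data = pd.read_csv("features.csv")   # hypothetical feature table
    X, y = data.drop(columns=["label"]), data["label"]

    candidates = {
        "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
        "logistic regression": LogisticRegression(max_iter=1000),
        "SVM (RBF kernel)": SVC(),
    }
    for name, clf in candidates.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")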
In conclusion, the goal is to create a tool that will find some type of common motif, patterns, and features.
So basically we compare and contrast the old features, or we can concatenate/combine the old and new features, as sketched below.
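
As an illustration of combining old and new features: one candidate new feature for transmembrane helix-helix interaction could be a count of GxxxG-type motifs in each helix sequence. The motif choice, the sequences.csv file, and its "sequence" column are my assumptions for this sketch, not anything taken from their pipeline.

    # Sketch: derive a new sequence feature and concatenate it onto the old
    # feature table. GxxxG counting is an illustrative choice; file and
    # column names are hypothetical.
    import re
    import pandas as pd

    def count_gxxxg(seq):
        # Count overlapping GxxxG occurrences (G, any three residues, G).
        return len(re.findall(r"(?=G...G)", seq))

    old = pd.read_csv("features.csv")      # old feature table
    seqs = pd.read_csv("sequences.csv")    # must align row-for-row with old
    old["gxxxg_count"] = seqs["sequence"].map(count_gxxxg)
    old.to_csv("features_combined.csv", index=False)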