
Need someone with Spark Scala, Hadoop and Hive Knowledge

₹12,500-37,500 INR

Closed
Posted over 1 year ago


Paid on delivery
Anyone with knowledge of the following technologies can bid:
- Programming languages: Scala (required), Python as well
- Hands-on experience with Spark
- Hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix
- Cloud experience on Cloudera or AWS
- Oozie workflows
- Experience creating CI/CD pipelines
- Unit/JUnit testing, integration or end-to-end testing
- Kafka
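To give a sense of the kind of work this stack implies, here is a minimal Spark Scala sketch of a batch job that reads a Hive table and writes a daily aggregate to HDFS as Parquet. The database, table, column names and output path are hypothetical, and it assumes a cluster (Cloudera or AWS) where Spark is configured against the Hive metastore.

```scala
// Minimal sketch only: table, column and path names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object OrdersDailyAggregate {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-daily-aggregate")
      .enableHiveSupport()        // read Hive tables through the cluster metastore
      .getOrCreate()
    import spark.implicits._

    // Read a Hive table, aggregate per day, and write the result back to HDFS.
    val orders = spark.table("sales_db.orders")          // hypothetical Hive table
    val daily = orders
      .groupBy($"order_date")
      .agg(sum($"amount").as("total_amount"))

    daily.write
      .mode("overwrite")
      .parquet("/warehouse/aggregates/orders_daily")     // hypothetical HDFS path

    spark.stop()
  }
}
```

In practice a job like this would typically be packaged with sbt or Maven, submitted with spark-submit, scheduled through an Oozie workflow, and covered by JUnit/ScalaTest unit tests in a CI/CD pipeline, matching the other requirements listed above.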
Project ID: 34535500

About the project

7 proposals
Remote project
Active 2 years ago

7 freelancers are bidding on average ₹22,786 INR for this job
Hello. I am an expert in big data analytics with 3 years of experience in this field. I have worked as a data analyst on a big data project team, handling data sources (structured and unstructured), data cleaning, data mining, and ETL using Apache Spark (Python, Scala). I also have extensive experience in data warehousing with Hive, HBase, and Apache Kafka, and in data visualization with Tableau, Pentaho, and Grafana. I believe I am a perfect fit for your project. If you would like to hire me, please contact me on chat. Thanks.
₹25,000 INR in 7 days
0.0 (0 reviews)
I have knowledge of software testing, including web and mobile testing, and have handled 4 testing projects. I also have knowledge of automation testing for web and mobile using Appium.
₹25,000 INR in 7 days
0.0 (0 reviews)
Hi, I have 8 years of experience in the IT industry and 3+ years of experience with big data technologies such as Spark, Scala, Hadoop, Kafka, Unix, Hive, SQL, JUnit, etc. I would be a good fit for your requirement. Thanks, Kopal
₹25,000 INR in 15 days
0.0 (0 reviews)
Hello, I can help you with the project. I am a computer engineer and an expert in big data. Throughout my professional career I have developed expertise in implementing data storage, transformation, and visualization solutions, with highlights in developing real-time flows and service APIs to support business needs. Going into detail, I have the following experience:
- Design of architectures based on Cloudera/Hortonworks distributions and Azure Cloud.
- Distributed data storage and processing: HDFS, Hive, Impala, and Kudu.
- NoSQL databases: MongoDB, Couchbase, Cassandra.
- Data ingestion in big data environments: Apache NiFi, Flume, Kafka.
- In-memory data processing for batch and streaming: Apache Spark.
- User tools: Hue, Oozie, and Sqoop.
- Indexing and search: ELK, Solr.
- Machine learning project methodology for predictive models.
- Programming languages: Java, Kotlin, Scala, Python, R, C++, C, VHDL, JavaScript, HTML, AngularJS.
- Data processing libraries: NumPy, pandas, scikit-learn.
- Neural network tools: TensorFlow and Keras.
- Anaconda and Jupyter Notebook.
- Azure, Azure Data Factory, Databricks.
₹25,000 INR in 7 days
0.0 (0 reviews)
Currently working as a Hadoop administrator on the Cloudera distribution for 3 clusters, ranging from POC clusters to PROD clusters. Involved in designing and deploying a multitude of applications utilizing almost all of the AWS stack. Responsible for cluster maintenance, commissioning and decommissioning data nodes, cluster monitoring, troubleshooting, managing and reviewing data backups, and managing and reviewing Hadoop log files. Architecture design and implementation of Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures. Responsible for cluster availability and experienced with on-call support. Experienced in setting up projects and volumes for new Hadoop projects. Involved in snapshots and HDFS data backups to maintain cluster data backups, including remotely. Experience in importing and exporting data from different databases such as MySQL, Oracle, and other RDBMSs. Good knowledge of Kerberos security.
Responsibilities:
- Installing, configuring, administering, and monitoring Linux
- Installing, upgrading, and managing Hadoop YARN clusters
- Configuring Hadoop high availability
- MapReduce performance tuning
- Managing and reviewing Hadoop and HBase log files
- Provisioning, installing, configuring, monitoring, and maintaining HDFS, YARN, HBase, Flume, Sqoop, Oozie, Pig, Hive
- Recovering from node failures and troubleshooting common Hadoop cluster issues
- Scripting Hadoop package installation and configuration
₹22,000 INR in 7 days
0.0 (0 reviews)

About the client

UJJAIN, India
5.0
40
Payment method verified
Member since October 27, 2018

Client verification
