Filter

My recent searches
Filter by:
Budget
Type
Skills
Languages
    Job Status
    2,000 hadoop hbase jobs found, pricing in USD

    / fs needs reconfiguration for hdfs layout

    $25 / hr (Avg Bid)
    8 bids

    Includes Java in the coding part; other than that, we require experience in AWS, Hadoop, and Spark.

    $5 / hr (Avg Bid)
    6 bids

    Hi, I am looking for a data analyst; the job is US healthcare claims work, providing the required support in Excel, SQL, DB2, Hadoop, and Informatica (basics). Daily one or two hours.

    $474 (Avg Bid)
    29 bids

    We are searching for an accountable, multitalented data engineer to facilitate the operations of our data scientists. The data engineer will be responsible for employing ...technological advancements that will improve the quality of your outputs. Data Engineer Requirements: Bachelor's degree in data engineering, big data analytics, computer engineering, or a related field. A Master's degree in a relevant field is advantageous. Proven experience as a data engineer, software developer, or similar. Expert proficiency in Python, C++, Java, R, and SQL. Familiarity with Hadoop or a suitable equivalent. Excellent analytical and problem-solving skills. A knack for independent and group work. Scrupulous approach to duties. Capacity to successfully manage a pipeline of duties with ...

    $43 / hr (Avg Bid)
    18 bids

    Design and creation of an OpenStack infrastructure to implement a Big Data platform based on Hadoop/Spark, as well as its implementation. The project requires three profiles: OpenStack Administrator, OpenStack Engineer, and IT Catalog Developer. The work will be carried out mostly in Madrid; more details in the attached file.

    $33931 (Avg Bid)
    4 bids

    ...has experience in writing on topics like AWS Azure GCP DigitalOcean Heroku Alibaba Linux Unix Windows Server (Active Directory) MySQL PostgreSQL SQL Server Oracle MongoDB Apache Cassandra Couchbase Neo4J DynamoDB Amazon Redshift Azure Synapse Google BigQuery Snowflake SQL Data Modelling ETL tools (Informatica, SSIS, Talend, Azure Data Factory, etc.) Data Pipelines Hadoop framework services (e.g. HDFS, Sqoop, Pig, Hive, Impala, Hbase, Flume, Zookeeper, etc.) Spark (EMR, Databricks etc.) Tableau PowerBI Artificial Intelligence Machine Learning Natural Language Processing Python C++ C# Java Ruby Golang Node.js JavaScript .NET Swift Android Shell scripting Powershell HTML5 AngularJS ReactJS VueJS Django Flask Git CI/CD (Jenkins, Bamboo, TeamCity, Octopus Deploy) Puppet/Ansible...

    $34 (Avg Bid)
    23 bids

    We are a leading training center, Ni analytics india, looking for an experienced Data Engineer to train our students in online live classes on weekdays / weekends. The ideal candidate should have 4 to 8 years of data engineering work experience with Big Data Hadoop, Spark, PySpark, Kafka, Azure, etc. We request interested candidates within our budget to respond, as we get regular enquiries from individual and corporate firms. This is an urgent requirement; kindly respond quickly. Thank you.

    $367 (Avg Bid)
    4 bids

    ...disk volume of a powered-down VM, causing a vdfs missing-file error. Need to figure out how to recover the missing volume if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x4 TB HDDs, 1 mounted to each VM. There should currently be 4 partitions per ...

    $83 (Avg Bid)
    9 bids

    ...volume of a powered-down VM; obviously that does not end well. Need to figure out how to recover the missing volume if at all possible. Also, there should be an old backup of the VM if we can't fix it, but we need to try the recovery first. Tasks: 1. Recover the volume on the VM. 2/3. Move VM backups/copies from the 4 existing VMs to the new 4 TB HDD drive (currently unmounted). These 4 VMs host the 4 nodes of a Hadoop CDH cluster environment, so the VMs can have their disk partitions safely expanded. Currently they share HDDs, so they are limited in size. 4. In those existing 4 VMs, maintain the existing partitions and expand storage to utilize the full capacity of one 4 TB drive per VM, for 4x4 TB HDDs, 1 mounted to each VM. There should currently be 4 partitions pe...

    $32 / hr (Avg Bid)
    5 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. It will involve the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat; make your bids.

    $665 (Avg Bid)
    7 bids
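
    A minimal PySpark sketch of what part 2 might look like, assuming a plain-text copy of the play at tempest.txt; the actual calculations required by TempestAnalytics.java are only shared in chat, so the ones below (line, word, and distinct-word counts) are illustrative:

```python
# Illustrative sketch only: the real TempestAnalytics spec is shared in chat;
# tempest.txt is an assumed local copy of The Tempest.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TempestAnalytics").getOrCreate()
lines = spark.sparkContext.textFile("tempest.txt")
words = lines.flatMap(lambda line: line.split())

print("lines:", lines.count())
print("words:", words.count())
print("distinct words:", words.map(lambda w: w.lower()).distinct().count())
spark.stop()
```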

    Data Analyst (Ended)

    Digital Analyst: Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduc...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • A keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiar...

    $12 (Avg Bid)
    2 bids
    Hive Projects (Ended)

    A mini project, with report and source code, on any topic in Hive and Hadoop.

    $65 (Avg Bid)
    3 bids
    Digital Analyst (Ended)

    Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform) b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduct analytics deep di...understanding of digital and data analytics • Excellent written, oral, and communication skills • Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy • A keen eye for UI on PBI/Tableau – can recommend designs independently • Can handle complicated data transformations on DBs & Big Data (Hadoop) • Familiarit...

    $28 (Avg Bid)
    6 bids
    Hadoop EMR setup (Ended)

    Hadoop EMR setup and data migration from Azure to AWS.

    $19 / hr (Avg Bid)
    11 bids
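
    A hedged sketch of the EMR side of such a migration using boto3's run_job_flow; the release label, instance types, S3 log bucket, and IAM role names below are placeholders, not values from the listing:

```python
# Hypothetical cluster spec; sizing, region, and bucket are assumptions.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
resp = emr.run_job_flow(
    Name="hadoop-migration-target",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,  # keep cluster up for migration jobs
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://my-bucket/emr-logs/",
)
print(resp["JobFlowId"])  # cluster id to target with DistCp/Sqoop steps
```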
    Hadoop Expert (Ended)

    Looking for a person who can help me install Hadoop.

    $5 / hr (Avg Bid)
    2 bids

    .../ Define the problem. Create tables with constraints. Design a schema based on the tables and explain the schema. Create primary keys and foreign keys. Create procedures. Create functions. Create views. Create an index. Use the following clauses, for example: ORDER BY, BETWEEN, GROUP BY, HAVING, AND, OR, WITH. Use aggregate functions. Use nested queries and a scalar subquery. Part 2 has to be done in HBASE: Create tables – 4 tables with column families and columns. Column families – 5 column families; make sure they have different parameters, e.g. versions. Minimum 4 columns in each column family. Insert records. Delete records. Perform basic queries like your assignment 1. Try to extract data using a timestamp. Insert partial data in a row. Describe the table. Check table status – enabled or disable...

    $145 (Avg Bid)
    33 bids
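
    For the HBase half (part 2), a minimal sketch using the happybase client, assuming a local HBase Thrift server; the table name, the five column families, and the row contents are made up for illustration:

```python
# Hypothetical table/family names; assumes an HBase Thrift server on localhost.
import happybase

conn = happybase.Connection("localhost")

# 5 column families, one with a non-default parameter (extra versions kept).
conn.create_table("orders", {
    "info": dict(max_versions=3),
    "items": dict(),
    "ship": dict(),
    "audit": dict(),
    "stats": dict(),
})

table = conn.table("orders")
table.put(b"row1", {b"info:customer": b"alice", b"items:sku": b"A-100"})
print(table.row(b"row1"))                           # basic query
print(table.row(b"row1", timestamp=1700000000000))  # only cells older than this ts
table.delete(b"row1", columns=[b"items:sku"])       # partial delete of one cell
conn.close()
```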

    Linux + Hadoop cloud migration: Azure data and on-prem data (Cloudera Hadoop) to AWS. Cloudera, Azure, AWS, DevOps. Database migration from on-prem to AWS.

    $19 / hr (Avg Bid)
    10 bids

    ※ Please see the attached and offer your price quote with questions [price and time are negotiable] ※ Will need your help from end of Dec ~ Jan, 2023. 1) Manual : Creating a development and installation manual for the overall service implementation guideline using the HDFS – Impala API > All details must be provided : command/option/setting file/config etc. > We will use your manual to create our own HDFS-based solution > An additional two to four weeks of take-over time [we can ask some questions when the process does not work under the manual process] 2) Consulting : Providing solutions for the heavy-load section (data insert delay) when data is inserted through HDFS > Data should be processed in 3 minutes, but sometimes it takes more time > Solutions for how we can remove or de...

    $999 (Avg Bid)
    7 bids
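
    One way to start diagnosing the insert delay in point 2) is simply to time the statement from a client. A hedged sketch with the impyla driver, where the host, port 21050 (Impala's default HiveServer2-protocol port), and table names are assumptions:

```python
# Hypothetical hosts/tables; times each insert to spot runs over the 3-minute SLA.
import time
from impala.dbapi import connect

conn = connect(host="impala-host", port=21050)
cur = conn.cursor()

start = time.monotonic()
cur.execute("INSERT INTO events SELECT * FROM staging_events")
elapsed = time.monotonic() - start
print(f"insert took {elapsed:.1f}s")  # flag anything over 180s for investigation

cur.close()
conn.close()
```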

    Hadoop, Linux, Ansible, cloud, and good communication skills required.

    $7 / hr (Avg Bid)
    1 bid

    Hello all, the objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you.

    $100 (Avg Bid)
    4 bids

    Need Big Data and Hadoop tools, some of them like Spark SQL, Hadoop, Hive and Databricks, data lakes.

    $30 (Avg Bid)
    6 bids

    Require a developer who has 2 to 3 years of good experience in DevOps support, which includes Hadoop services, Windows, Linux, and Ansible, with a little cloud touch.

    $8 / hr (Avg Bid)
    7 bids

    The objective of this assignment is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time.

    $120 (Avg Bid)
    16 bids

    1. Implement the straggler solution using the approach below: a) Develop a method to detect slow tasks (stragglers) in the Hadoop MapReduce framework using the Progress Score (PS), Progress Rate (PR), and Remaining Time (RT) metrics. b) Develop a method of selecting idle nodes to replicate detected slow tasks, using the CPU time and Memory Status (MS) of the idle nodes. c) Develop a method for scheduling the slow tasks to appropriate idle nodes, using the CPU time and Memory Status of the idle nodes. 2. A good report on the implementation, with graphics. 3. A recorded execution process. Use any certified data to test the efficiency of the methods.

    $186 (Avg Bid)
    Urgent
    11 bids
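
    For reference, the PS/PR/RT metrics named in (a) are usually combined as in the sketch below (a LATE-style heuristic, not the listing's required design): progress rate is the progress score over elapsed time, and remaining time is the unfinished fraction divided by that rate:

```python
# LATE-style straggler heuristic; the task-telemetry shape is an assumption.
def progress_rate(ps: float, elapsed_s: float) -> float:
    """PR = progress score / elapsed time."""
    return ps / elapsed_s if elapsed_s > 0 else 0.0

def remaining_time(ps: float, elapsed_s: float) -> float:
    """RT = (1 - PS) / PR: estimated seconds until the task finishes."""
    pr = progress_rate(ps, elapsed_s)
    return (1.0 - ps) / pr if pr > 0 else float("inf")

def stragglers(tasks, slow_fraction=0.25):
    """tasks: (task_id, ps, elapsed_s) tuples; flag the slowest quartile by PR."""
    ranked = sorted(tasks, key=lambda t: progress_rate(t[1], t[2]))
    return ranked[:max(1, int(len(ranked) * slow_fraction))]

# t2 has made the least progress per second, so it is flagged for replication.
print(stragglers([("t1", 0.9, 60), ("t2", 0.2, 60), ("t3", 0.8, 60)]))
```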
    Stack : DATA ENG (Ended)

    Stack : DATA ENG
    1. AWS
    2. SPARK / HADOOP
    3. PYTHON
    4. Terraform

    $13 / hr (Avg Bid)
    3 bids

    I have an input text file and mapper and reducer files which output the total count of each word in the text file. I would like to have the mapper and reducer files output only the top 20 words (and their counts) with the highest counts. The files use ... and I want to be able to run them in Hadoop.

    $138 (Avg Bid)
    12 bids
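
    Assuming these are Hadoop Streaming-style Python scripts whose mapper emits word<TAB>count pairs, the usual trick is to keep the existing mapper, run with a single reducer, and have that reducer hold the totals and emit only the 20 largest; a sketch:

```python
#!/usr/bin/env python3
# top20_reducer.py -- run with -D mapreduce.job.reduces=1 so one reducer
# sees every word; input lines are assumed to be "word<TAB>count".
import sys
import heapq

counts = {}
for line in sys.stdin:
    word, _, n = line.rstrip("\n").rpartition("\t")
    counts[word] = counts.get(word, 0) + int(n)

# Emit only the 20 highest-count words.
for word, n in heapq.nlargest(20, counts.items(), key=lambda kv: kv[1]):
    print(f"{word}\t{n}")
```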

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop and StreamSets Data Collector for about 10 hours of consultation. 1.- I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals when the query is running, using StreamSets. 2.- Besides that, looking for Flink options to extract data from Kafka using tumbling aggregation intervals.

    $11 / hr (Avg Bid)
    2 bids
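
    For point 2.-, a hedged PyFlink sketch of a 15-minute tumbling-window aggregation over a Kafka source (requires the Flink Kafka SQL connector jar on the classpath); the topic, fields, and broker address are placeholders:

```python
# Hypothetical topic/schema/broker; shows the tumbling-window shape only.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Kafka source with an event-time attribute and watermark.
t_env.execute_sql("""
    CREATE TABLE readings (
        device_id STRING,
        reading DOUBLE,
        ts TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '30' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'readings',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# 15-minute tumbling windows: each event lands in exactly one window,
# so intervals neither overlap nor drop data between runs.
result = t_env.sql_query("""
    SELECT device_id,
           TUMBLE_START(ts, INTERVAL '15' MINUTE) AS win_start,
           COUNT(*) AS n,
           AVG(reading) AS avg_reading
    FROM readings
    GROUP BY device_id, TUMBLE(ts, INTERVAL '15' MINUTE)
""")
result.execute().print()
```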

    I need help from a freelancer with strong knowledge of StreamSets Data Collector and/or Flink. Needed: a freelancer with experience in Flink, Hadoop and StreamSets Data Collector for about 10 hours of consultation. 1.- I want to extract data from a DB and generate aggregation files every 15 minutes, ensuring there is no missing data between intervals when the query is running, using StreamSets. 2.- Besides that, looking for Flink options to extract data from Kafka using tumbling aggregation intervals. Please contact me ASAP. Thanks, David

    $18 / hr (Avg Bid)
    21 bids

    I have some problems to be completed using Hadoop

    $12 (Avg Bid)
    1 bid

    Hi, we are looking for an experienced person in "Hadoop". Need to give job support by connecting remotely and taking mouse control, for an Indian guy living in the US. USD 300$/month, 2 hrs/day, 5 days/week. Timings: anytime after 7 P.M. IST works, any 2 hrs before 10 A.M. IST.

    $250 (Avg Bid)
    1 bid

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure.

    $409 (Avg Bid)
    15 bids

    Someone who has experience with Spark, Hadoop, Hive, and Kafka processing with Azure.

    $129 (Avg Bid)
    8 bids

    ...ORDER BY AVG(d_year) Consider a Hadoop job that processes an input data file of size equal to 179 disk blocks (179 different blocks, not considering the HDFS replication factor). The mapper in this job requires 1 minute to read and fully process a single block of data. The reducer requires 1 second (not minute) to produce an answer for one key's worth of values, and there are a total of 3000 distinct keys (mappers generate a lot more key-value pairs, but keys only occur in the 1-3000 range, for a total of 3000 unique entries). Assume that each node has a reducer and that the keys are distributed evenly. The total cost will consist of the time to perform the Map phase plus the cost to perform the Reduce phase. How long will it take to complete the job if you only had one Hadoop worker n...

    $200 (Avg Bid)
    1 bid
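
    For what it's worth, the single-worker case the question describes works out as below, assuming the map and reduce phases run back to back with no overlap:

```python
# One worker: the 179 maps run sequentially, then all 3000 keys reduce there.
blocks, minutes_per_block = 179, 1
keys, seconds_per_key = 3000, 1

map_minutes = blocks * minutes_per_block        # 179 minutes
reduce_minutes = keys * seconds_per_key / 60    # 50 minutes
print(map_minutes + reduce_minutes)             # 229 minutes total
```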

    I need someone to solve the attached questions. They're about MapReduce, Hadoop, and Pig, and require Python skills as well. I attached an example of some expected solutions.

    $20 (Avg Bid)
    10 bids

    I can successfully run the MapReduce job on the server. But when I want to submit this job as a YARN remote client in Java (via the YARN REST API), I get the following error. I want to submit this job successfully via the remote client (YARN REST API).

    $12 (Avg Bid)
    3 bids
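
    The usual two-step flow against the ResourceManager REST API is sketched below (rm-host, the jar, and the command are placeholders); comparing your request body against this shape is a common way to localize the error:

```python
# Hypothetical host/command; shows the new-application -> submit sequence.
import requests

RM = "http://rm-host:8088/ws/v1/cluster"

# Step 1: ask the ResourceManager for a fresh application id.
app = requests.post(f"{RM}/apps/new-application").json()
app_id = app["application-id"]

# Step 2: submit the application spec under that id.
spec = {
    "application-id": app_id,
    "application-name": "remote-mapreduce",
    "application-type": "MAPREDUCE",
    "am-container-spec": {
        "commands": {"command": "yarn jar my-job.jar MyDriver input output"},
    },
}
r = requests.post(f"{RM}/apps", json=spec)
r.raise_for_status()  # a 4xx/5xx here usually carries the RM's error detail
print(r.status_code)
```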

    Looking for a Python and Scala expert. The candidate should have knowledge of Big Data domains such as Hadoop, Spark, Hive, etc. Knowledge of Azure Cloud is a plus. Share your CV.

    $711 (Avg Bid)
    8 bids

    Block matrix addition should be done using MapReduce.

    $60 (Avg Bid)
    2 bids
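
    A minimal Hadoop Streaming sketch of the idea, assuming input records of the form "tag i j value" (tag naming the source matrix): the mapper keys every entry by its coordinate so the matching entries of A and B meet in the same reduce call and are summed into C = A + B:

```python
#!/usr/bin/env python3
# mapper.py -- assumed input: "tag i j value" per line (tag = A or B).
import sys

for line in sys.stdin:
    parts = line.split()
    if len(parts) != 4:
        continue  # skip malformed or blank lines
    tag, i, j, value = parts
    print(f"{i},{j}\t{value}")  # key on the block coordinate
```

```python
#!/usr/bin/env python3
# reducer.py -- streaming input arrives sorted by key, so A's and B's
# entries for one coordinate are adjacent; sum them for C = A + B.
import sys

current, total = None, 0.0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = key, 0.0
    total += float(value)
if current is not None:
    print(f"{current}\t{total}")
```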
    hadoop project (Ended)

    Block matrix addition should be done using MapReduce.

    $190 (Avg Bid)
    11 bids

    Current technology stack in use – Apache Hadoop (3.1 version) cluster in production. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. Set up the minimum required services, set up the data lake on Azure, then migrate sample data that the customer will provide. We're looking for a Hadoop Developer for a three-month contract role. It's purely work from home with flexible timings. Please get back to us if you're interested. The job description is given below. 1. Current technology stack in use – Apache Hadoop (3.1 version) cluster in production. 2. Urgent deployment of a resource who has experience in Azure Data Lake migration, Hadoop, Kafka, and NiFi. 3. Set up the minimum r...

    $13 / hr (Avg Bid)
    4 bids
    Data Scientist (Ended)

    Job Description: Identify valuable data sources and automate collection processes. Undertake preprocessing of structured and unstructured data. Analyze large amounts of information to discover trends and patterns. Build predictive models a...Data Science or another quantitative field is preferred. 3-5 years of proven experience as a Data Scientist. Experience in DataRobot or any similar tool. Experience in data mining. Understanding of machine learning and operations research. Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset. Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop). An analytical mind and business acumen. Strong math skills (e.g. statistics, algebra). Problem-solving aptitude. Excellent communication and presentat...

    $22 / hr (Avg Bid)
    21 bids