
Set up a TF CycleGAN for a specialized text-image-image-text data compression research project

$10-30 USD

Closed
Posted: more than 4 years ago

Paid on completion
I have $30. What I want is unusual, in the realm of generative adversarial networks; it may change the future of computer science, and it may earn you additional contracts, though not from me. I want a complete setup for a specialized neural network, running in TensorFlow, that can be trained on commodity CPUs using vast.ai.

Let me be clear: the deliverable is a folder of files and instructions. Assume I am running stock Ubuntu 19.10, have a [login to view URL] account and a credit card, and know nothing else beyond how to execute bash scripts, use SSH, and install packages. Everything must be scripted so that all I have to do is follow the instructions and then train the network. You are required to test the network yourself to verify that it improves over time. The network must be configurable both to train and to generate results, final and intermediate. For it to function in a useful manner, I must later be able to configure it to dump intermediate products, and also to accept an intermediate product as command-line input.

[login to view URL] [login to view URL]

The outcome I want is a novel trained model that achieves the maximum possible compression of a specific data set (ASCII text files) with the minimum symbol library necessary to reconstitute it, by virtue of having intuitively learned the mechanics required to compress text. A compression engine only achieves compression by saving bits overall. If the product of this engine is a model that can create compressed forms of text and reconstitute them, but the model's files, when added to the compressed text and weighed together, weigh just as much as the original file did, it has not learned to compress the data. For this reason we must train the engine on a large number of inputs.
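The train/generate modes and intermediate-product handling described above could be exposed through a command-line interface along these lines. This is purely a hypothetical sketch: the flag names (`--mode`, `--dump-intermediate`, `--from-intermediate`, `--checkpoint`) are my own assumptions, not part of any existing tool.

```python
# Hypothetical CLI sketch for the requested workflow. All flag names are
# assumptions; the actual deliverable would wire these into the model code.
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build a CLI covering the three behaviors the brief asks for:
    training, generating final results, and decoding from a
    previously dumped intermediate product."""
    p = argparse.ArgumentParser(
        description="Text <-> image compression GAN (sketch)")
    p.add_argument("--mode", choices=["train", "generate"], required=True,
                   help="train the network or generate results")
    p.add_argument("--input",
                   help="ASCII text file to train on or compress")
    p.add_argument("--dump-intermediate",
                   help="path to write the encoder's intermediate product")
    p.add_argument("--from-intermediate",
                   help="decode from a previously dumped intermediate product")
    p.add_argument("--checkpoint", default="checkpoints/",
                   help="model checkpoint directory")
    return p


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

A launcher bash script could then simply invoke this with the appropriate flags, which keeps the "follow some instructions, then train" requirement intact.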
I want to take a very large text corpus of pure ASCII, pure English sentences (the Calgary Corpus, article text from Wikipedia, long books from Project Gutenberg, [login to view URL] content), convert it temporarily to PNG or another suitable image format, feed the image to a CycleGAN encoder, extract the intermediate product, measure its size, feed the intermediate product to the generator, and then use a comparator to determine whether the output matches the input. The network is "rewarded" when the number of bits in the intermediate result is reduced. Both the encoder and the generator play a role in learning here: the encoder learns to compress, the generator to decompress. The discriminator is not necessarily a functional part of this mechanism, since its only role is to verify that the text is faithful; we are not trying to fool a discriminator. We are using an image as the intermediate product in an attempt to naively discover vector relationships with semantic structures in the text, relationships that can produce arbitrary shapes of novel dimensional complexity, and to discover the most common such shapes, analogous to symbol libraries, and thus learn the optimal way to compress the input.
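The text-to-image packing step and the bit-accounting that defines the reward could look roughly like this. This is a minimal sketch under stated assumptions: the encoder and generator themselves are omitted, and only the reversible byte-to-image conversion and the bits-saved measurement (which, as noted above, must eventually amortize the model's own size) are shown.

```python
# Sketch of the text -> image packing and the bit accounting for the
# "reward" described above. The CycleGAN encoder/generator are not shown.
import math

import numpy as np


def text_to_image(text: str) -> np.ndarray:
    """Pack ASCII bytes into a square uint8 grayscale image,
    zero-padding the tail so the buffer reshapes cleanly."""
    data = np.frombuffer(text.encode("ascii"), dtype=np.uint8)
    side = math.ceil(math.sqrt(data.size))
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[: data.size] = data
    return padded.reshape(side, side)


def image_to_text(img: np.ndarray, length: int) -> str:
    """Invert text_to_image, dropping the zero padding."""
    return img.reshape(-1)[:length].tobytes().decode("ascii")


def bits_saved(original: str, intermediate: np.ndarray) -> int:
    """Bits saved by the intermediate product relative to the raw text.
    Positive only when the intermediate is genuinely smaller; a complete
    accounting would also charge the stored model weights against this."""
    return len(original.encode("ascii")) * 8 - intermediate.nbytes * 8
```

The round trip `image_to_text(text_to_image(t), len(t)) == t` must hold exactly, since lossy reconstruction would defeat the purpose of a compression engine.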
Project ID: 22161051

About the Project

Remote project
Active 4 years ago


About the Client

Tipton, United States
Rating: 0.0 (0 reviews)
Member since: November 5, 2019

Client Verification
