Ideapoke is a crowdsourcing software platform that allows companies to showcase their products, services, or technologies and solve business problems by partnering with academia, experts, and other companies.
About Us: Helical IT, based in Hyderabad, is a software company that specializes in open-source Data Warehousing & Business Intelligence, serving clients in domains such as Manufacturing, HR, Energy, Insurance, Social Media Analytics, E-commerce, and Travel.
Job Description:
- Hands-on experience with AWS and AWS Glue (mandatory)
- Demonstrated strength in data modeling, ETL development, and data warehousing
- Hands-on experience with big data technologies (Hadoop, Hive, HBase, Spark, etc.); Apache Spark mandatory
- Hands-on experience with Spark and SQL
- Hands-on experience with a programming language: Scala, Python, R, or Java (any one)
- Strong database knowledge
- Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling decisions and data engineering strategy
- Understanding of Agile development
Nice to Have:
- Experience with business intelligence reporting tools
- Experience with AWS QuickSight
- Understanding of databases such as Postgres, SQL Server, Cassandra, S3, and Hadoop
- Performance tuning of Spark jobs
- Knowledge of a BI tool such as Tableau, Jasper, Pentaho, or Helical Insight
Skills and Qualification:
- BE, B.Tech, or MS degree in Computer Science, Engineering, or a related subject
- 2+ years of experience
- Ability to work independently
- Good written and oral communication skills
Want to shape the future of Energy through Data Science? We all know that without good data there is no Data Science: garbage in, garbage out. 60-70% of the effort in a data science project is spent on Data Engineering & Feature Engineering. That's where we need your skills: to fetch data from disparate sources, transform it the way the business needs (which may include applying critical business logic specific to each source), and load it into a data warehouse or big data system. This critical work complements the Data Scientist, with a continuous feedback loop based on how a model is performing and what fine-tuning the data needs. The Energy Exemplar (EE) data team is looking for an experienced Data Engineer to join our Pune office. As a dedicated Data Engineer on our Research team, you will apply data engineering expertise and work closely with the core data team to identify data sources for specific energy markets and create an automated data pipeline. The pipeline will then incrementally pull data from its sources and maintain a dataset, which in turn provides tremendous value to hundreds of EE customers. At EE, you'll have access to vast amounts of energy-related data from our sources. Our data pipelines are curated and supported by engineering teams. We also offer many company-sponsored classes and conferences that focus on data science and ML. There's great growth opportunity for data science at EE.
Responsibilities:
- Develop, test, and maintain architectures such as databases and large-scale processing systems built on high-performance data pipelines.
- Recommend and implement ways to improve data reliability, efficiency, and quality.
- Identify performant features and make them universally accessible to our teams across EE.
- Work with data analysts and data scientists to wrangle data and provide quality datasets and insights for business-critical decisions.
- Take end-to-end responsibility for the development, quality, testing, and production readiness of the services you build.
- Define and evangelize Data Engineering best practices and standards to ensure engineering excellence at every stage of the development cycle.
- Act as a resident expert for data engineering, feature engineering, and exploratory data analysis.
Qualifications:
- 2+ years of professional experience developing data pipelines for large-scale, complex datasets from a variety of data sources.
- Data engineering expertise with strong experience in big data technologies such as Hadoop, Hive, Spark, Scala, and Python.
- Experience with cloud-based data technologies such as Azure Data Lake, Azure Data Factory, and Azure Databricks highly desirable.
- Knowledge of and experience with database systems such as Cassandra, HBase, and Cosmos DB.
- Moderate coding skills. SQL or similar required. C# or other languages strongly preferred.
- Proven track record of designing and delivering large-scale, high-quality systems and software products.
- Outstanding communication and collaboration skills. You can learn from and teach others.
- Strong drive for results. You have a proven record of shepherding experiments into successful shipping products/services.
- Experience with prediction in adversarial (energy) environments highly desirable.
- A Bachelor's or Master's degree in Computer Science or Engineering with coursework in Statistics, Data Science, Experimentation Design, and Machine Learning highly desirable.
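The fetch-transform-load flow described above can be sketched in miniature. This is an illustrative sketch only, not EE's actual pipeline: the source records, field names, and the "drop unparseable readings" business rule are all invented, and sqlite3 stands in for the warehouse.

```python
import sqlite3

def extract():
    """Pull raw records from a disparate source (here: an in-memory list)."""
    return [
        {"site": "plant_a", "reading": "41.5", "unit": "MWh"},
        {"site": "plant_b", "reading": "N/A", "unit": "MWh"},
        {"site": "plant_c", "reading": "12.0", "unit": "MWh"},
    ]

def transform(rows):
    """Apply source-specific business logic: drop unparseable readings."""
    out = []
    for row in rows:
        try:
            out.append((row["site"], float(row["reading"])))
        except ValueError:
            continue  # e.g. the "N/A" sentinel used by this particular source
    return out

def load(rows, conn):
    """Load the clean rows into the warehouse table (sqlite3 stands in)."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (site TEXT, mwh REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(mwh) FROM readings").fetchone()[0]
```

A production pipeline would run the same three stages incrementally on a schedule, which is what makes the feedback loop with the data scientists possible.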
Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and to achieving extraordinary precision in analytics?
What is your role? As a Computer Vision & Machine Learning Engineer at Takvaviya, you'll be core to the development of our robotic harvesting system's visual intelligence. You'll bring deep computer vision, machine learning, and software expertise while thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you'll directly build our success, growth, and culture. You'll hold a significant role and grow it as Takvaviya grows.
What you'll do:
- Work with the core R&D team that drives computer vision and image processing development.
- Build deep learning models for our data and for object detection on large-scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation.
- Coordinate and communicate across the computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate the workflow between fast-paced data delivery systems.
What we are looking for:
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive use of Python.
- Experience with Python libraries such as OpenCV, TensorFlow, and NumPy.
- Familiarity with a deep learning library such as Keras or PyTorch.
- Experience with CNN architectures such as FCN, R-CNN, Fast R-CNN, and YOLO.
- Experience with hyperparameter tuning, data augmentation, data wrangling, model optimization, and model deployment.
- B.E./M.E./M.Sc. in
Computer Science/Engineering or a relevant degree.
- Docker, AWS modules, and production-level modelling.
Preferred Requirements:
- Experience with Qt, desktop application development, and desktop automation.
- Knowledge of satellite image processing, Geographic Information Systems, GDAL, QGIS, and ArcGIS.
About Takvaviya Analytics: Takvaviya Analytics, Inc. is an AI-driven image analytics company offering asset management solutions for industries in the Renewable Energy, Infrastructure, Utilities & Agriculture sectors. With core expertise in image processing, computer vision & machine learning, Takvaviya's solution provides value across the enterprise for all stakeholders through a data-driven approach. With sales & operations based out of the US, Europe & India, Takvaviya is a team of 32 people located across different geographies, with varied domain expertise and interests. A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.
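As a small, concrete piece of the object-detection work described above, here is intersection-over-union (IoU), the standard box-overlap metric used to score detectors such as Fast R-CNN and YOLO. This is a plain-Python sketch; the box coordinates are made-up examples.

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2),
# with x1 < x2 and y1 < y2. Used when matching predicted boxes
# to ground-truth boxes during detector evaluation.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty, in which case width/height clamp to 0).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap area 1, union area 7
```

A typical evaluation counts a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.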
Job Description: The Big Data Engineer at Draup is responsible for building scalable techniques and processes for data storage, transformation, and analysis. The role includes decision-making on and implementation of optimal, generic, and reusable data platforms. You will work directly with a proficient, smart, and experienced team of developers, researchers, and co-founders on all application use cases.
What You Will Do:
- Develop, maintain, test, and evaluate big data solutions within the organisation.
- Build scalable architectures for data storage, transformation, and analysis.
- Design and develop solutions that are scalable, generic, and reusable.
- Build and execute data warehousing, mining, and modelling activities using agile development techniques.
- Lead big data projects from scratch to production.
- Create a platform on top of stored data sources using a distributed processing environment like Spark, so users can run ad-hoc queries with complete abstraction from the internal data points.
- Solve problems in robust and creative ways.
- Collaborate with the machine learning and harvesting teams.
What You'll Need:
- Proficient understanding of distributed computing principles.
- Good programming experience in Python (must have).
- Proficiency in Apache Spark (PySpark) is a must.
- Experience with integration of data from multiple data sources.
- Experience with SQL and NoSQL data stores such as MongoDB.
- Good working knowledge of MapReduce, HDFS, and Amazon S3.
- Knowledge of Scala is preferable.
- Ability to think in a functional-programming style.
- Hands-on experience tuning software for maximum performance.
- Ability to communicate complex technical concepts to both technical and non-technical audiences.
- Takes ownership of all technical aspects of software development for assigned projects.
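The MapReduce model and functional-programming style this posting asks for can be illustrated with a toy word count in plain Python: map emits (key, value) pairs, a shuffle brings equal keys together, and reduce folds each group. A real job would of course run on Spark or Hadoop; the documents below are invented.

```python
from functools import reduce
from itertools import groupby

docs = ["big data big plans", "big data"]

# Map: emit one (word, 1) pair per occurrence.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: bring equal keys together (a distributed shuffle; here, a sort).
shuffled = groupby(sorted(mapped), key=lambda kv: kv[0])

# Reduce: fold the counts for each key.
counts = {word: reduce(lambda acc, kv: acc + kv[1], pairs, 0)
          for word, pairs in shuffled}
```

The same shape maps directly onto PySpark, e.g. `rdd.flatMap(...).map(lambda w: (w, 1)).reduceByKey(add)`.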
We are looking for internship candidates.
Designation: Intern/Trainee
Technology: .NET/Java/Python/AI/ML
Duration: 2-3 months
Job Location: Online internship
Joining: Immediately
Job Type: Internship
Job Description:
- MCA/M.Tech/B.Tech/BE students who need a 2-6 month internship project.
- Should be available to join us immediately.
- Should be flexible to work on any skills/technologies.
- Willing to work long hours.
- Must possess excellent analytical and logical skills.
- Internship training is provided by experts.
- An internship certificate will be provided at the end of the training.
- The requirement is strictly for an internship, not a permanent job.
- A stipend will be provided based only on performance.
We are looking for a Machine Learning Engineer with 3+ years of experience, a background in Statistics, and hands-on experience in the Python ecosystem, using sound software engineering practices.
Skills & Knowledge:
- Formal knowledge of the fundamentals of probability & statistics, along with the ability to apply basic statistical analysis methods like hypothesis testing, t-tests, ANOVA, etc.
- Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing, and analysis.
- Thorough understanding of data-modeling and machine-learning concepts.
- Complete understanding of, and the ability to apply, implement, and adapt, standard implementations of machine learning algorithms.
- Good understanding of, and the ability to apply and adapt, neural networks and deep learning, including common high-level architectures like CNNs and RNNs.
- Fundamentals of computer science & programming, especially data structures (like multi-dimensional arrays, trees, and graphs) and algorithms (like searching, sorting, and dynamic programming).
- Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, version control, etc.
Languages and Libraries:
- Hands-on experience with Python and Python libraries for data analysis and machine learning, especially scikit-learn, TensorFlow, Pandas, NumPy, statsmodels, and SciPy.
- Experience with R and its ecosystem is a plus.
- Knowledge of other open-source machine learning and data modeling frameworks like Spark MLlib, H2O, etc. is a plus.
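As a small illustration of the statistical methods listed above, here is the Welch two-sample t-test statistic computed from scratch with only the standard library. In practice one would call `scipy.stats.ttest_ind(a, b, equal_var=False)`; the sample data below are invented.

```python
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Unbiased sample variance (ddof=1)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = sample_var(a) / len(a), sample_var(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

The p-value would then come from the t distribution with `df` degrees of freedom, which is where a library like SciPy or statsmodels takes over.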
About Turing: Turing enables U.S. companies to hire the world's best remote software engineers. 100+ companies, including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer, have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers. We are growing fast (our revenue 15x'd in the past 12 months and is accelerating), and we have raised $14M in seed funding (one of the largest rounds in Silicon Valley) from:
- Facebook's 1st CTO and Quora's co-founder (Adam D'Angelo)
- Executives from Google, Facebook, Square, Amazon, and Twitter
- Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
- Cyan Banister
- The founder of Upwork (Beerud Sheth)
We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month. Some articles about Turing:
- TechCrunch: Turing raises $14M seed to help source, vet, place, and manage remote developers
- The Information: Six Startups Prospering During Coronavirus
- Cyan Banister: Turing Helps the World Level Up
- Jonathan Siddharth (Turing CEO): The Future of Work is Remote
Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and had a successful acquisition (TechCrunch story). Turing's leadership team is composed of ex-engineering and sales leadership from Facebook, Google, Uber, and Capgemini.
About the role: Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and received customer feedback, in the form of interview pass/fail data and data on the success of each collaboration with a U.S. customer.
This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features, and labels in the form of actual customer feedback. Continuing rapid growth in our business creates an ever-increasing data advantage for us. We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects many valuable heterogeneous signals about software developers, including their resume, their GitHub profile and associated code, fine-grained signals from Turing's own screening tests and interviews (spanning computer science fundamentals, project ownership and collaboration, communication skills, proactivity, and tech-stack skills), their history of successful collaboration with different companies on Turing, etc. A Machine Learning Scientist at Turing will help create deep developer profiles that accurately represent a developer's strengths and weaknesses as they relate to the probability of being successfully matched to one of Turing's partner companies and having a fruitful long-term collaboration. The ML Scientist will build models that rank developers for different jobs based on their probability of success at the job. You will also help make Turing's tests more efficient by assessing their ability to predict the probability of a successful match of a developer with at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce perplexity as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time. As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing's product(s).
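The "maximize expected information gain" idea in the paragraph above can be made concrete with a small sketch: score a binary test about a binary outcome ("will this developer be matched?") by how much it is expected to reduce entropy. The 1% prior comes from the text; the test sensitivities and specificities below are hypothetical numbers chosen purely for illustration.

```python
from math import log2

def entropy(p):
    """Binary entropy in bits, with the 0*log(0) = 0 convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_gain(prior, sensitivity, specificity):
    """H(prior) minus the expected posterior entropy after one test result."""
    p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
    post_pos = prior * sensitivity / p_pos if p_pos else 0.0
    p_neg = 1 - p_pos
    post_neg = prior * (1 - sensitivity) / p_neg if p_neg else 0.0
    return entropy(prior) - (p_pos * entropy(post_pos)
                             + p_neg * entropy(post_neg))

prior = 0.01                                    # ~1% match rate, per the text
gain_perfect = expected_gain(prior, 1.0, 1.0)   # resolves all uncertainty
gain_useless = expected_gain(prior, 0.5, 0.5)   # coin flip, zero information
```

Ranking candidate tests by expected gain per unit of test time is one way to pick the question that moves the probability estimate toward 0% or 100% fastest.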
This role will report directly to Turing's founder and CTO, Vijay Krishnan. This is his Google Scholar profile.
Responsibilities:
- Enhance our existing machine learning systems using your core coding skills and ML knowledge.
- Take end-to-end ownership of machine learning systems: data pipelines, feature engineering, candidate extraction, and model training, as well as integration into our production systems.
- Utilize state-of-the-art ML modeling techniques to predict user interactions and their direct impact on the company's top-line metrics.
- Design features and build large-scale recommendation systems to improve targeting and engagement.
- Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.
Minimum Requirements:
- BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
- Extensive experience building scalable machine learning systems and data-driven products with cross-functional teams.
- Expertise in machine learning fundamentals applicable to search: learning to rank, deep learning, tree-based models, recommendation systems, relevance and data mining, and an understanding of NLP approaches like word2vec or BERT.
- 2+ years of experience applying machine learning methods in settings like recommender systems, search, user modeling, graph representation learning, and natural language processing.
- Strong understanding of neural networks/deep learning, feature engineering, feature selection, and optimization algorithms.
- Proven ability to dig deep into practical problems and choose the right ML method to solve them.
- Strong programming skills in Python and fluency in data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/TensorFlow) tools.
- Good understanding of the mathematical foundations of machine learning algorithms.
- Availability for meetings and communication during Turing's "coordination hours" (Mon-Fri, 8 am to 12 pm PST).
Other Nice-to-Have Requirements:
- First-author publications at ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals.
- Strong performance in Kaggle competitions.
- 5+ years of industry experience, or a Ph.D. with 3+ years of industry experience, in applied machine learning on similar problems, e.g. ranking, recommendation, ads, etc.
- Strong communication skills.
- Experience leading large-scale multi-engineering projects.
- Flexible, positive team player with outstanding interpersonal skills.
Position: Data Engineer
Location: Chennai - Guindy Industrial Estate
Duration: Full-time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Responsibilities for Data Engineer:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer:
- Experience building and optimizing big data ETL pipelines, architectures, and data sets.
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Big data tools: Spark, Kafka, HBase, Hive, etc.
- Relational SQL and NoSQL databases
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented/object-function scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL
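The stream-processing systems mentioned in this posting (Storm, Spark Streaming) are built around windowed aggregation over an unbounded event stream; the core idea can be sketched with plain Python generators. The event values and window size below are illustrative, and real deployments would consume from Kafka or similar rather than a list.

```python
from itertools import islice

def tumbling_windows(stream, size):
    """Yield consecutive, non-overlapping lists of `size` events each;
    a final partial window is yielded if the stream ends mid-window."""
    it = iter(stream)
    while True:
        window = list(islice(it, size))
        if not window:
            return
        yield window

events = [3, 1, 4, 1, 5, 9, 2, 6, 5]          # e.g. click counts per second
window_sums = [sum(w) for w in tumbling_windows(events, 3)]
```

Spark Streaming exposes the same pattern at scale, e.g. `stream.window(windowDuration)` followed by an aggregation, with the engine handling partitioning and fault tolerance.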
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner serving the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to enable data-driven, insightful decisions.
Must-have skills:
- Hands-on experience with database systems (structured and unstructured).
- Programming in Python, R, or SAS.
- Overall knowledge of and exposure to architecting solutions on cloud platforms like GCP, AWS, and Microsoft Azure.
- Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code.
- Hands-on experience in data model design and in developing BigQuery/SQL (any variant) stored procedures.
- Optimize data structures for efficient querying of those systems.
- Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable.
- Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design.
- Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities.
- At least 2 years of experience building applications, solutions, and products based on analytics.
- Data extraction, data cleansing, and transformation.
- Strong knowledge of REST APIs, HTTP servers, and MVC architecture.
- Knowledge of continuous integration/continuous deployment.
Preferred but not required:
- Machine learning and deep learning experience.
- Certification on any cloud platform.
- Experience with data migration from on-prem to cloud environments.
- Exceptional analytical, quantitative, problem-solving, and critical thinking skills.
- Excellent verbal and written communication skills.
Work Location: Bangalore
• Exposure to Deep Learning, Neural Networks, or related fields, and a strong interest and desire to pursue them.
• Experience in Natural Language Processing, Computer Vision, Machine Learning, or Machine Intelligence (Artificial Intelligence).
• Programming experience in Python.
• Knowledge of machine learning frameworks like TensorFlow.
• Experience with software version control systems like GitHub.
• Understanding of Big Data technologies such as Hadoop, MongoDB, and Apache Spark.