Zyoin is India's leading recruitment solutions provider, with experience working with more than 300 IT clients.
Job Responsibilities:
- Design machine learning systems
- Research and implement appropriate ML algorithms and tools
- Develop machine learning applications according to requirements
- Select appropriate datasets and data representation methods
- Run machine learning tests and experiments
- Perform statistical analysis and fine-tuning using test results
- Train and retrain systems when necessary

Requirements for the Job:
- Bachelor's/Master's/PhD in Computer Science, Mathematics, Statistics, or an equivalent field from a tier-one college, and a minimum of 2 years of overall experience
- Minimum 1 year of experience working as a Data Scientist deploying ML at scale in production
- Experience with machine learning techniques (e.g., NLP, computer vision, BERT, LSTM) and frameworks (e.g., TensorFlow, PyTorch, scikit-learn)
- Working knowledge of deploying Python systems (using Flask or TensorFlow Serving); a minimal serving sketch follows this posting

Previous experience in the following areas will be preferred:
- Natural Language Processing (NLP): LSTM and BERT; chatbots or dialogue systems, machine translation, text comprehension, text summarization
- Computer Vision: deep neural networks/CNNs for object detection and image classification; transfer learning pipelines and object detection/instance segmentation (Mask R-CNN, YOLO, SSD)
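For illustration, a minimal sketch of the kind of Python deployment this posting names: a Flask endpoint serving a pre-trained model. The model file name, input format, and port are assumptions, not part of the posting.

    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical pre-trained scikit-learn model saved with pickle.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"features": [[1.0, 2.0, 3.0]]}.
        features = request.get_json()["features"]
        prediction = model.predict(features)
        return jsonify({"prediction": prediction.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)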
Job Title: Hadoop Developer
Work Location: Hyderabad
Experience: 7+ years
Package: Up to 15 LPA
Notice Period: Immediate to 15 days
It's a full-time opportunity with our client.

Mandatory Skills: Big Data, Hadoop, Spark, Scala, Hive, Pig, Sqoop, Oozie

Job Description:
- Overall 7 years of experience, with at least 5 years in the Big Data space.
- Hadoop developer with Spark concepts, Scala programming, and Hive; should be able to explain tuples, DataFrames, etc. (see the sketch after this posting).
- Strong Hadoop skills: Spark/Scala/Hive/Pig/Sqoop/Oozie (must).
- Good exposure to Kafka (preferred).
- Good exposure to Java (preferred).
- End-to-end delivery experience on complex, high-volume, high-velocity projects.
- Good experience with at least one scripting language, such as Scala or Python.
- Good exposure to Big Data architectures.
- Experience building frameworks on Hadoop.
- Very good understanding of the Big Data ecosystem.
- Experience sizing and estimating large-scale Big Data projects.
- Good database knowledge, with SQL tuning experience.
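As a sketch of the Spark fluency the posting asks about (tuples, DataFrames), a minimal PySpark example; the column names and rows are made up for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("demo").getOrCreate()

    # Build a DataFrame from a list of tuples: (name, department, salary).
    rows = [("asha", "data", 90), ("ravi", "data", 80), ("mei", "ml", 95)]
    df = spark.createDataFrame(rows, ["name", "department", "salary"])

    # Average salary per department, highest first.
    (df.groupBy("department")
       .agg(F.avg("salary").alias("avg_salary"))
       .orderBy(F.desc("avg_salary"))
       .show())

    spark.stop()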
Required Skills:
- Proficient in Scala (Akka HTTP, actors, streams) or Python
- Proficient with Spark and writing MapReduce applications
- Experience building and optimizing ETL processes, data pipelines, and architectures
- Experience implementing a data warehouse (Snowflake, BigQuery, Redshift)

Preferred:
- Experience with any graph database
- NoSQL databases such as MongoDB or Elasticsearch
SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. In this role, your primary responsibility will be to design and build data pipelines. You will focus on helping client projects with data integration, data preparation, and implementing machine learning on datasets. You will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.

Responsibilities:
- Work as a member of a team designing and implementing data integration solutions.
- Build data pipelines using standard frameworks in Hadoop, Apache Beam, and other open-source solutions (a minimal Beam sketch follows this posting).
- Learn quickly: understand and rapidly come up to speed in new areas, functional and technical, and apply detailed, critical thinking to customer solutions.
- Propose design solutions and recommend best practices for large-scale data analysis.

Skills:
- B.Tech degree in computer science, mathematics, or another relevant field.
- 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
- Strong programming skills, with experience and expertise in one of the following: Java, Python, Scala, C.
- Proficiency with big data/distributed computing frameworks such as Apache Spark and Kafka.
- Experience with Agile implementation methodologies.
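By way of illustration, a minimal Apache Beam pipeline in Python, one of the frameworks the posting names; the in-memory input is a stand-in for a real source such as files or Pub/Sub.

    import apache_beam as beam

    # Sum values per key: yields ("alpha", 4) and ("beta", 2).
    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Read" >> beam.Create(["alpha,1", "beta,2", "alpha,3"])
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "ToKV" >> beam.Map(lambda kv: (kv[0], int(kv[1])))
            | "SumPerKey" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )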
Job roles and responsibilities:
- Minimum 3 to 4 years of hands-on experience designing, building, and operationalizing large-scale enterprise data solutions and applications using GCP data and analytics services such as Cloud Dataproc, Cloud Dataflow, BigQuery, Cloud Pub/Sub, and Cloud Functions.
- Hands-on experience analyzing, re-architecting, and re-platforming on-premise data warehouses to data platforms on GCP using GCP and third-party services.
- Experience designing and building data pipelines within a hybrid big data architecture using Java, Python, Scala, and GCP-native tools.
- Hands-on experience orchestrating and scheduling data pipelines using Cloud Composer and Airflow (a minimal DAG sketch follows this posting).
- Experience performing detailed assessments of current-state data platforms and creating an appropriate transition path to GCP.

Technical Skills Required:
- Strong experience with GCP data and analytics services.
- Working knowledge of the big data ecosystem: Hadoop, Spark, HBase, Hive, Scala, etc.
- Experience building and optimizing data pipelines in Spark.
- Strong skills in orchestrating workflows with Cloud Composer/Apache Airflow.
- Good knowledge of object-oriented scripting languages: Python (must have) and Java or C++.
- Good to have: knowledge of building CI/CD pipelines with GCP Cloud Build and native GCP services.
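As a sketch of the orchestration skill the posting names, a minimal Apache Airflow DAG (the kind of definition that also runs on Cloud Composer); the DAG id, schedule, and commands are placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A two-step daily pipeline: extract, then load.
    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extracting")
        load = BashOperator(task_id="load", bash_command="echo loading")
        extract >> load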
- 3+ years of experience in machine learning.
- Bachelor's/Master's in Computer Engineering/Science, or Bachelor's/Master's in Engineering/Mathematics/Statistics with sound knowledge of programming and computer concepts.
- 70% and above in 10th and 12th academics.

Skills:
- Strong Python/programming skills.
- Good conceptual understanding of machine learning, deep learning, and natural language processing.
- Strong verbal and written communication skills.
- Able to manage a team, meet project deadlines, and interface with clients.
- Able to work across different domains, quickly ramp up on business processes and flows, and translate business problems into data solutions.
We’re building our Engineering team at Assembly, a fast-growing start-up based in Los Angeles and Bangalore! Our recognition and rewards platform offers organizations a better way to engage employees and drive organizational culture.

About You
We are looking for our very first Data Scientist to join the team and contribute to the evolving Assembly product offering. You will be responsible for building state-of-the-art ML models to improve product usage for our customers. You are passionate about company culture. You love challenging yourself to constantly improve, and you share your knowledge to empower others. You are self-directed and scrappy, able to solve problems effectively without compromising the product. You look beyond the surface to understand root causes so that you can build long-term solutions for the whole ecosystem. And finally, you enjoy being part of a small but mighty team with a mission of changing the way companies engage their employees!

Job Description
We are looking for a data scientist who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products. Early projects include, but are not limited to, building a system for automated fraud detection and analyzing sentences to deliver value to our customers.

Responsibilities
- Selecting features, and building and optimizing classifiers using machine learning techniques (a small classifier sketch follows this posting)
- Data mining using state-of-the-art methods
- Extending the company’s data with third-party sources of information when needed
- Enhancing data collection procedures to include information relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance

Requirements
- Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVMs, and decision forests
- Experience with common data science toolkits, such as Jupyter notebooks, SageMaker, Weka, NumPy, and scikit-learn; excellence in at least one of these is highly desirable
- Great communication skills
- Experience with state-of-the-art models such as RoBERTa, GPT, etc. is a huge plus
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Data-oriented personality
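For illustration, a small scikit-learn sketch of the classifier-building work described above, using synthetic data as a stand-in for real labeled examples (e.g., fraud vs. not fraud); the dataset shape and model choice are assumptions.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Synthetic, imbalanced binary classification data.
    X, y = make_classification(
        n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0
    )

    # A decision forest, one of the algorithm families the posting lists.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))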
Skills Requirements:
- Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
- Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker.
- Knowledge of Python is desirable.
- Experience with HDP Manager/clients and various dashboards.
- Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
- Experience with automation/configuration management using Chef, Ansible, or an equivalent.
- Strong experience with any Linux distribution.
- Basic understanding of network technologies, CPU, memory, and storage.
- Database administration is a plus.

Qualifications and Education Requirements:
- 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
- Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
A research lab with roots in innovation, we are looking for someone who can take the reins of our AI-based development think tank. Given strong work ethic and results, the salary can be renegotiated after 5 months.
We aim to transform the recruiting industry.