11+ Text mining Jobs in Hyderabad | Text mining Job openings in Hyderabad
JOB TITLE - Product Development Engineer - Machine Learning
● Work Location: Hyderabad
● Full-time
Company Description
Phenom People is the leader in Talent Experience Marketing (TXM for short). We’re an early-stage startup on a mission to fundamentally transform how companies acquire talent. As a category creator, our goals are two-fold: to educate talent acquisition and HR leaders on the benefits of TXM and to help solve their recruiting pain points.
Job Responsibilities:
- Design and implement machine learning, information extraction, probabilistic matching algorithms and models
- Research and develop innovative, scalable and dynamic solutions to hard problems
- Work closely with Machine Learning Scientists (PhDs), ML engineers, data scientists and data engineers to address challenges head-on.
- Use the latest advances in NLP, data science and machine learning to enhance our products and create new experiences
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Be a valued contributor in shaping the future of our products and services
- You will be part of our Data Science & Algorithms team and collaborate with product management and other team members
- Be part of a fast-paced, fun-focused, agile team
Job Requirement:
- 4+ years of industry experience
- Ph.D./MS/B.Tech in computer science, information systems, or similar technical field
- Strong mathematics, statistics, and data analytics skills
- Solid coding and engineering skills; machine learning experience preferred but not mandatory
- Proficient in Java, Python, and Scala
- Industry experience building and productionizing end-to-end systems
- Knowledge of Information Extraction, NLP algorithms coupled with Deep Learning
- Experience with data processing and storage frameworks like Hadoop, Spark, Kafka etc.
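The information-extraction and NLP requirements above typically build on basics such as term weighting. As an illustrative sketch (plain Python, stdlib only, not part of the job post), a minimal TF-IDF computation over tokenized documents:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter(t for doc in docs for t in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # Term frequency scaled by inverse document frequency.
        weights.append({
            t: (c / total) * math.log(n / df[t])
            for t, c in tf.items()
        })
    return weights

docs = [["data", "science"], ["data", "mining"], ["text", "mining"]]
w = tf_idf(docs)
# "data" appears in 2 of 3 documents, so it is weighted lower than "science".
```

Production systems would use a library implementation (e.g. scikit-learn's vectorizers) rather than hand-rolled code; this only illustrates the idea.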
Position Summary
We’re looking for a Machine Learning Engineer to join our team at Phenom. We expect candidates to fulfill the points below for this role.
- Building accurate machine learning models is the primary goal of a machine learning engineer
- Linear Algebra, Applied Statistics and Probability
- Building Data Models
- Strong knowledge of NLP
- Good understanding of multithreaded and object-oriented software development
- Strong mathematical foundations
- Collaborate with Data Engineers to prepare data models required for machine learning models
- Collaborate with other product team members to apply state-of-the-art AI methods, including dialogue systems, natural language processing, information retrieval, and recommendation systems
- Build large-scale software systems and work on numerical computation topics
- Use predictive analytics and data mining to solve complex problems and drive business decisions
- Should be able to design accurate end-to-end ML architectures, including data flows, algorithm scalability, and applicability
- Tackle situations where both the problem and the solution are unknown
- Solve analytical problems, and effectively communicate methodologies and results to the customers
- Adept at translating business needs into technical requirements and translating data into actionable insights
- Work closely with internal stakeholders such as business teams, product managers, engineering teams, and customer success teams.
Benefits
- Competitive salary for a startup
- Gain experience rapidly
- Work directly with the executive team
- Fast-paced work environment
About Phenom People
At PhenomPeople, we believe candidates (job seekers) are consumers. That’s why we’re bringing the e-commerce experience to the job search, with a view to converting candidates into applicants. The Intelligent Career Site™ platform delivers the most relevant and personalized job search yet: a career site optimized for mobile and desktop interfaces and designed to integrate with any ATS; tailored content selection such as Glassdoor reviews, YouTube videos, and LinkedIn connections based on candidate search habits; and an integrated real-time recruiting analytics dashboard.
Use Company career sites to reach candidates and encourage them to convert. The Intelligent Career Site™ offers a single platform to serve candidates a modern e-commerce experience from anywhere on the globe and on any device.
We track every visitor that comes to the Company career site. Through fingerprinting technology, candidates are tracked from the first visit and served jobs and content based on their location, click-stream, behavior on site, browser and device to give each visitor the most relevant experience.
Like consumers, candidates research companies and read reviews before they apply for a job. Through our understanding of the candidate journey, we are able to personalize their experience and deliver relevant content from sources such as corporate career sites, Glassdoor, YouTube and LinkedIn.
We give you clear visibility into the Company's candidate pipeline. By tracking up to 450 data points, we build profiles for every career site visitor based on their site visit behavior, social footprint and any other relevant data available on the open web.
Gain a better understanding of Company’s recruiting spending and where candidates convert or drop off from Company’s career site. The real-time analytics dashboard offers companies actionable insights on optimizing source spending and the candidate experience.
Kindly explore the company, Phenom: https://www.phenom.com/
YouTube - https://www.youtube.com/c/PhenomPeople
LinkedIn - https://www.linkedin.com/company/phenompeople/
Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and use in data access and analysis.
- Proficient in data management, including data analysis capabilities.
- Excellent verbal and written communication skills and high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
Airflow developer:
Experience: 5 to 10 years; relevant experience must be above 4 years.
Work location: Hyderabad (Hybrid Model)
Job description:
· Experience in working on Airflow.
· Experience in SQL, Python, and Object-oriented programming.
· Experience in the data warehouse, database concepts, and ETL tools (Informatica, DataStage, Pentaho, etc.).
· Azure experience and exposure to Kubernetes.
· Experience in Azure data factory, Azure Databricks, and Snowflake.
Required Skills: Azure Databricks/Data Factory, Kubernetes/Docker, DAG development, hands-on Python coding.
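DAG development here refers to Airflow's model of a pipeline as a directed acyclic graph of tasks executed in dependency order. As a minimal illustration of that idea (plain Python with a hypothetical task graph, not the Airflow API):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL pipeline: transform and validate both depend on
# extract, and load runs only after both of them have finished.
dag = {
    "transform": {"extract"},
    "validate": {"extract"},
    "load": {"transform", "validate"},
}

# static_order() yields tasks so every task appears after its dependencies,
# which is exactly the guarantee an Airflow scheduler provides.
order = list(TopologicalSorter(dag).static_order())
```

In real Airflow code the same structure is expressed with operators and `>>` dependency arrows inside a `DAG` context; the scheduling principle is the same.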
at Persistent Systems
Location: Pune/Nagpur/Goa/Hyderabad
Job Requirements:
- 9+ years of total experience, preferably in the big data space.
- Creating spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in spark job performance tuning and optimizations.
- Should have experience in processing data using Kafka/Python.
- Should have experience and understanding in configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in Data Warehouse.
- Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience on AWS services like EMR.
at Virtusa
- Minimum 1 year of relevant experience in PySpark (mandatory)
- Hands-on experience developing, testing, deploying, maintaining, and improving data integration pipelines in an AWS cloud environment is a plus
- Ability to play a lead role and independently manage a 3-5 member PySpark development team
- EMR, Python, and PySpark are mandatory
- Knowledge of AWS cloud technologies such as Apache Spark, Glue, Kafka, Kinesis, and Lambda, alongside S3, Redshift, and RDS
We are hiring a software engineer: minimum 1 year of experience, engineering graduate from a Mechanical/EEE/EC/CS stream.
- The primary role will be helping our customers with development requirements in image processing.
- It will also involve developing technical specifications and product descriptions in the image processing field.
You will get to work on new and disruptive technologies.
Key Skills:
- Familiarity with basic Linux commands
- Hands-on experience in image processing application development based on deep neural networks, OpenCV, etc.
- Experience working with Python, R, TensorFlow, and C/C++
Job Location : Hyderabad
Resumes to be sent to Ogive mail id
at R&D Company
Job Title: Chief Engineer: Deep Learning Compiler Expert
You will collaborate with experts in machine learning, algorithms and software to lead our effort of deploying machine learning models onto Samsung Mobile AI platform.
In this position, you will contribute to, develop, and enhance our compiler infrastructure for high performance, using open-source technologies such as MLIR, LLVM, TVM, and IREE.
Necessary Skills / Attributes:
- 6 to 15 years of experience in the field of compiler design and graph mapping.
- 2+ years hands-on experience with MLIR and/or LLVM.
- Experience with multiple toolchains, compilers, and Instruction Set Architectures.
- Strong knowledge of resource management, scheduling, code generation, and compute graph optimization.
- Strong expertise in writing production-quality C++ code to modern standards (C++17 or newer), following test-driven development principles.
- Comfortable and experienced in software development life cycle - coding, debugging, optimization, testing, and continuous integration.
- Familiarity with parallelization techniques for ML acceleration.
- Experience working on and contributing to an active compiler toolchain codebase, such as LLVM, MLIR, or Glow.
- Experience in deep learning algorithms and techniques, e.g., convolutional neural networks, recurrent networks, etc.
- Experience developing in a mainstream machine-learning framework, e.g. PyTorch, TensorFlow, or Caffe.
- Experience operating in a fast-moving environment where the workloads evolve at a rapid pace.
- Understanding of the interplay of hardware and software architectures on future algorithms, programming models and applications.
- Experience developing innovative architectures to extend the state of the art in DL performance and efficiency.
- Experience with Hardware and Software Co-design.
M.S. or higher degree, in CS/CE/EE or equivalent with industry or open-source experience.
Work Profile:
- Design, implement and test compiler features and capabilities related to infrastructure and compiler passes.
- Ingest CNN graphs in Pytorch/TF/TFLite/ONNX format and map them to hardware implementations, model data-flows, create resource utilization cost-benefit analysis and estimate silicon performance.
- Develop graph compiler optimizations (operator fusion, layout optimization, etc) that are customized to each of the different ML accelerators in the system.
- Integrate open-source and vendor compiler technology into Samsung ML internal compiler infrastructure.
- Collaborate with Samsung ML acceleration platform engineers to guide the direction of inferencing and provide requirements and feature requests for hardware vendors.
- Closely follow industry and academic developments in the ML compiler domain and provide performance guidelines and standard methodologies for other ML engineers.
- Create and optimize compiler backend to leverage the full hardware potential, efficiently optimizing them using novel approaches.
- Evaluate code performance, debug, diagnose and drive resolution of compiler and cross-disciplinary system issues.
- Contribute to the development of machine-learning libraries, intermediate representations, export formats and analysis tools.
- Communicate and collaborate effectively with cross-functional hardware and software engineering teams.
- Champion engineering and operational excellence, establishing metrics and processes for regular assessment and improvement.
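One of the graph-compiler optimizations mentioned above, operator fusion, merges runs of consecutive elementwise operators into a single fused kernel so intermediate results never round-trip through memory. A toy sketch on a hypothetical straight-line IR (illustrative only, not Samsung's or MLIR's representation):

```python
# Elementwise ops are cheap per element and safe to fuse back-to-back.
ELEMENTWISE = {"add", "mul", "relu"}

def fuse_elementwise(ops):
    """Merge runs of consecutive elementwise ops in a toy straight-line IR.

    `ops` is a list of op-name strings; non-elementwise ops (e.g. "conv")
    act as fusion barriers.
    """
    def flush(run):
        return "fused(" + "+".join(run) + ")" if len(run) > 1 else run[0]

    fused, run = [], []
    for op in ops:
        if op in ELEMENTWISE:
            run.append(op)
        else:
            if run:
                fused.append(flush(run))
                run = []
            fused.append(op)
    if run:
        fused.append(flush(run))
    return fused

graph = ["conv", "add", "relu", "conv", "mul"]
out = fuse_elementwise(graph)
# -> ["conv", "fused(add+relu)", "conv", "mul"]
```

Real fusion passes operate on dataflow graphs with cost models and legality checks; this only shows the core rewrite.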
Keywords to source candidates
Senior Developer, Deep Learning, Prediction engine, Machine Learning, Compiler
Responsibilities:
- The Machine & Deep Machine Learning Software Engineer (Expertise in Computer Vision) will be an early member of a growing team with responsibilities for designing and developing highly scalable machine learning solutions that impact many areas of our business.
- The individual in this role will help in the design and development of Neural Network (especially Convolution Neural Networks) & ML solutions based on our reference architecture which is underpinned by big data & cloud technology, micro-service architecture and high performing compute infrastructure.
- Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, development, and production implementation.
Required Skills:
- An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems.
- Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries
- Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts
- Experience in designing full stack ML solutions in a distributed computing environment
- Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU instances
- Excellent communication skills with multiple levels of the organization
- Experience with image CNNs, image processing, Mask R-CNN, and Faster R-CNN is a must.
at Bigdatamatica Solutions Pvt Ltd
A top MNC is looking for candidates in Business Analytics (4-8 years of experience).
Requirement :
- Experience in metric development & Business analytics
- High Data Skill Proficiency/Statistical Skills
- Tools: R, SQL, Python, Advanced Excel
- Good verbal/communication Skills
- Supply Chain domain knowledge
*Job Summary*
Duration: 6-month contract based in Hyderabad
Availability: 1 week/Immediate
Qualification: Graduate/PG from Reputed University
*Key Skills*
R, SQL, Advanced Excel, Python
*Required Experience and Qualifications*
5 to 8 years of Business Analytics experience.