Hi,
Enterprise Minds is looking for a Data Architect for our Pune location.
Required skills:
Python, PySpark, Hadoop, Java, Scala
About Enterprise Minds
Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on their trajectory towards better outcomes, relevance and growth.
Harnessing the power of data and the forces that define AI, machine learning and data science, we believe in institutionalising go-to-market models, not just exploring possibilities.
We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership and collaboration, our people work in a spirit of co-creation, co-innovation and co-development to engineer next-generation software products with the help of accelerators.
Through communities we connect and attract talent that shares skills and expertise. Through innovation labs and global design studios we deliver creative solutions.
We create vertically isolated pods with a narrow but deep focus, and horizontal pods that collaborate to deliver sustainable outcomes.
We follow Agile methodologies to fail fast and deliver scalable, modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.
Responsibilities
> Selecting features, building and optimizing classifiers using machine learning techniques
> Data mining using state-of-the-art methods
> Extending the company’s data with third-party sources of information when needed
> Enhancing data collection procedures to include information that is relevant for building analytic systems
> Processing, cleansing, and verifying the integrity of data used for analysis
> Doing ad-hoc analysis and presenting results in a clear manner
> Creating automated anomaly detection systems and constantly tracking their performance
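The last responsibility above, automated anomaly detection with performance tracking, can be sketched in a few lines. This is purely illustrative (not the employer's actual system) and uses only the Python standard library:

```python
# Purely illustrative anomaly detector: flag points whose z-score exceeds a
# threshold. Production systems would add rolling windows, seasonality
# handling, and alerting on the detector's own performance.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values lying more than `threshold` std devs from the mean."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# With only 10 points, a single outlier's z-score is capped near 2.85,
# so a lower threshold is used for this tiny demo.
series = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(series, threshold=2.5))  # -> [7]
```

A real deployment would run this per metric on a schedule and feed the flagged indices into a monitoring dashboard.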
Key Skills
> Hands-on experience with analysis tools like R and advanced Python
> Knowledge of statistical techniques and machine learning algorithms (must have)
> Artificial Intelligence
> Understanding of text analysis / Natural Language Processing (NLP)
> Knowledge of Google Cloud Platform
> Advanced Excel and PowerPoint skills
> Advanced communication (written and oral) and strong interpersonal skills
> Ability to work cross-culturally
> Deep learning (good to have)
> VBA and visualization tools like Tableau, Power BI, QlikSense, and QlikView will be an added advantage
What are the Key Responsibilities:
- Design NLP applications
- Select appropriate annotated datasets for Supervised Learning methods
- Use effective text representations to transform natural language into useful features
- Find and implement the right algorithms and tools for NLP tasks
- Develop NLP systems according to requirements
- Train the developed model and run evaluation experiments
- Perform statistical analysis of results and refine models
- Extend ML libraries and frameworks to apply in NLP tasks
- Remain updated in the rapidly changing field of machine learning
What are we looking for:
- Proven experience as an NLP Engineer or similar role
- Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
- Ability to effectively design software architecture
- Deep understanding of text representation techniques (such as n-grams, bag-of-words, sentiment analysis, etc.), statistics, and classification algorithms
- Knowledge of Python, Java, and R
- Ability to write robust and testable code
- Experience with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Strong communication skills
- An analytical mind with problem-solving abilities
- Degree in Computer Science, Mathematics, Computational Linguistics, or similar field
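The text-representation techniques named in the requirements above (bag-of-words, n-grams) reduce to a few lines of plain Python. A minimal sketch; real projects would typically use scikit-learn's CountVectorizer or TfidfVectorizer instead:

```python
# Minimal sketch of two text-representation techniques: bag-of-words counts
# and word n-grams. Tokenization here is a naive lowercase split.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def bag_of_words(text):
    """Map each token to its count in the text."""
    return Counter(tokenize(text))

def ngrams(text, n=2):
    """Return consecutive n-token tuples (bigrams by default)."""
    tokens = tokenize(text)
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

doc = "the cat sat on the mat"
print(bag_of_words(doc))  # 'the' appears twice, every other token once
print(ngrams(doc))        # [('the', 'cat'), ('cat', 'sat'), ...]
```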
We are looking for a Big Data Engineer with Java for our Chennai location.
Location: Chennai
Experience: 11 to 15 years
Job description
Required skills:
1. Candidate should have a minimum of 7 years of total experience.
2. Candidate should have a minimum of 4 years of experience in Big Data design and development.
3. Candidate should have experience in Java, Spark, Hive, Hadoop, and Python.
4. Candidate should have experience in any RDBMS.
Roles & Responsibilities:
1. To create work plans, monitor and track the work schedule for on time delivery as per the defined quality standards.
2. To develop and guide the team members in enhancing their technical capabilities and increasing productivity.
3. To ensure process improvement and compliance in the assigned module, and participate in technical discussions or review.
4. To prepare and submit status reports for minimizing exposure and risks on the project or closure of escalations.
Regards,
Priyanka S
7899408877
- Python coding skills
- Scikit-learn, pandas, TensorFlow/Keras experience
- Machine learning: designing ML models for regression, classification, dimensionality reduction, anomaly detection, etc., and explaining them
- Implementing machine learning models and pushing them to production
- Creating Docker images for ML models; REST API creation in Python
Additional skills (compulsory):
- Knowledge and professional experience of text and NLP-related projects such as text classification, text summarization, topic modeling, etc.
- Knowledge and professional experience of vision and deep learning for documents: CNNs and deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
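One of the compulsory text skills above, text classification, can be illustrated with a tiny from-scratch multinomial Naive Bayes. This sketch is for intuition only; real projects would use scikit-learn or transformer-based models:

```python
# Tiny multinomial Naive Bayes text classifier with Laplace smoothing,
# built from the standard library only. Illustrative, not production code.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for word in doc.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, doc):
        words = doc.lower().split()
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            score = math.log(count / total)  # log class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:  # Laplace-smoothed log likelihoods
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayes()
clf.fit(["great movie", "loved it", "terrible film", "hated it"],
        ["pos", "pos", "neg", "neg"])
print(clf.predict("loved this movie"))  # -> pos
```

Pushing such a model to production, per the list above, would then mean wrapping `predict` in a REST endpoint and packaging it in a Docker image.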
PriceLabs (https://www.chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is cloud-based software for vacation and short-term rentals that helps them dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.
We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, identify their limitations, and help us design and develop long-lasting solutions to address them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.
Responsibilities:
- Design, develop and enhance our pricing algorithms to enable new capabilities.
- Process, analyze, model, and visualize findings from our market level supply and demand data.
- Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
- Take ownership of product ideas and design discussions.
- Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
Requirements:
- Bachelor's, Master's, or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science, or another quantitative/engineering field.
- Strong understanding of analysis of algorithms, data structures and statistics.
- Solid programming experience, including the ability to quickly prototype an idea and test it.
- Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
- Experience with relational databases and strong knowledge of SQL.
- Experience building data heavy analytical models in the travel industry.
- Experience in the vacation rental industry.
- Experience developing dynamic pricing models.
- Prior experience working in a fast-paced environment.
- Willingness to wear many hats.
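To make the "dynamic pricing models" requirement concrete, here is a purely illustrative demand-based pricing rule. This is NOT PriceLabs' actual algorithm; the target occupancy, sensitivity, and bounds are all made-up parameters:

```python
# Illustrative dynamic pricing rule (not PriceLabs' actual algorithm):
# scale a base nightly rate by how far forecast occupancy deviates from a
# target, clipped to min/max multipliers.
def dynamic_price(base_rate, occupancy, target=0.7, sensitivity=0.5,
                  floor=0.6, ceiling=1.8):
    """Return an adjusted rate; occupancy and target are fractions in [0, 1]."""
    multiplier = 1.0 + sensitivity * (occupancy - target) / target
    multiplier = max(floor, min(ceiling, multiplier))  # keep within bounds
    return round(base_rate * multiplier, 2)

print(dynamic_price(100, 0.9))  # high forecast demand -> price above base
print(dynamic_price(100, 0.4))  # low forecast demand  -> price below base
```

A production model would replace the single occupancy signal with market-level supply/demand data and seasonality, which is exactly what the responsibilities above describe.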
We are looking for candidates with 3-6 years of BI/DW experience and expertise in Spark, Scala, SQL, and Azure.
An Azure background is required.
* Spark hands-on: Must have
* Scala hands-on: Must have
* SQL expertise: Expert
* Azure background: Must have
* Python hands-on: Good to have
* ADF, Databricks: Good to have
* Should be able to communicate effectively and deliver technology implementations end to end
Looking for candidates who can join within 15 to 30 days or who are available immediately.
Regards
Gayatri P
Fragma Data Systems
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Responsibilities for Data Engineer
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
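The extract-transform-load work described in the responsibilities above can be sketched at toy scale with the standard library, using an in-memory SQLite database as a stand-in for a warehouse (a real pipeline would target Redshift or EMR via Spark, with names and schemas entirely hypothetical here):

```python
# Toy ETL step: extract rows from a CSV source, transform (validate and
# drop malformed records), and load into a SQL store. The data, table name,
# and schema are hypothetical; sqlite3 stands in for a real warehouse.
import csv
import io
import sqlite3

raw = "user_id,amount\n1,10.50\n2,not_a_number\n1,4.25\n"  # hypothetical source

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE purchases (user_id INTEGER, amount REAL)")

for row in csv.DictReader(io.StringIO(raw)):
    try:
        amount = float(row["amount"])  # transform: validate the amount field
    except ValueError:
        continue  # skip rows that fail validation
    conn.execute("INSERT INTO purchases VALUES (?, ?)",
                 (int(row["user_id"]), amount))
conn.commit()

total = conn.execute(
    "SELECT SUM(amount) FROM purchases WHERE user_id = 1").fetchone()[0]
print(total)  # -> 14.75
```

At production scale, the same extract/validate/load shape is what Spark jobs and orchestration tools like Airflow coordinate across many sources.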
Qualifications for Data Engineer
Experience building and optimizing big data ETL pipelines, architectures, and data sets.
Advanced working SQL knowledge and experience working with relational databases and query authoring, as well as working familiarity with a variety of databases.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
Experience with big data tools: Spark, Kafka, HBase, Hive etc.
Experience with relational SQL and NoSQL databases
Experience with AWS cloud services: EC2, EMR, RDS, Redshift
Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL