We are Still Hiring!!!
Dear Candidate,
This email is regarding open positions for Data Engineer professionals with our organisation, CRMNext.
If you find the company profile and job description matching your aspirations, and your profile meets the required skills and qualifications criteria, please share your updated resume along with your responses to the questions.
We shall reach out to schedule interviews after this.
About Company:
Driven by a Passion for Excellence
Acidaes Solutions Pvt. Ltd. is a fast-growing specialist Customer Relationship Management (CRM) product IT company providing ultra-scalable CRM solutions. It offers CRMNEXT, our flagship and award-winning CRM platform, to leading enterprises on both cloud and on-premise models. We consistently focus on using state-of-the-art technology to provide leading product capabilities to our customers.
CRMNEXT is a global cloud CRM solution provider credited with the world's largest installation ever. From Fortune 500 companies to start-ups, businesses across nine verticals have built profitable customer relationships via CRMNEXT. A pioneer of digital CRM for some of the largest enterprises across Asia-Pacific, CRMNEXT's customers include global brands such as Pfizer, HDFC Bank, ICICI Bank, Axis Bank, Tata AIA, Reliance, National Bank of Oman, and Pavers England. It was recently lauded in the 2015 Gartner Magic Quadrant for Lead Management, Sales Force Automation and Customer Engagement. For more information, visit us at www.crmnext.com.
Educational Qualification:
B.E./B.Tech/M.E./M.Tech/MCA (a B.Sc. IT/B.Sc. Comp/BCA is mandatory with MCA)
Minimum 60% in Xth, XIIth/diploma, and B.E./B.Tech/M.E./M.Tech/MCA
All education should be regular (please note: degrees earned through distance learning/correspondence will not be considered)
Experience level: 2 to 5 years
Location: Andheri (Mumbai)
Technical expertise required:
1) Analytics experience in the BFSI domain is a must
2) Hands-on technical experience in Python, big data and AI
3) Understanding of data models and analytical concepts
4) Client engagement:
Should have run past client engagements for big data/AI projects, from requirement gathering through planning development sprints to delivery
Should have experience in deploying big data and AI projects
First-hand experience of data governance, data quality, customer data models and industry data models
Awareness of the SDLC
Regards,
Deepak Sharma
HR Team
Similar jobs
Job Title: SQL Query Writer - Analytics Automation
Location: Thane (West), Mumbai
Experience: 4-5 years
Responsibilities:
- Develop and optimize SQL queries for efficient data retrieval and analysis.
- Automate analytics processes using platforms like SQL, Python, ACL, Alteryx, Analyzer, Excel Macros, and Access Query.
- Collaborate with cross-functional teams to understand analytical requirements and provide effective solutions.
- Ensure data accuracy, integrity, and security in automated processes.
- Troubleshoot and resolve issues related to analytics automation.
Qualifications:
- Minimum 3 years of experience in SQL query writing and analytics automation.
- Proficiency in SQL, Python, ACL, Alteryx, Analyzer, Excel Macros, and Access Query.
- Strong analytical skills and attention to detail.
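As an illustration of the kind of SQL query development and automation this role involves, here is a minimal sketch using Python's built-in sqlite3 module; the table, columns and sample data are hypothetical, not taken from the posting:

```python
import sqlite3

# In-memory database with a hypothetical transactions table
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        branch TEXT NOT NULL,
        amount REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO transactions (branch, amount) VALUES (?, ?)",
    [("Thane", 120.0), ("Thane", 80.0), ("Andheri", 200.0)],
)
# An index on the filter/group column keeps the aggregate query efficient
conn.execute("CREATE INDEX idx_txn_branch ON transactions (branch)")

totals = dict(conn.execute(
    "SELECT branch, SUM(amount) FROM transactions GROUP BY branch"
).fetchall())
print(totals)
```

In practice such a query would be wrapped in a scheduled script so the analytics output refreshes without manual intervention.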
Key Responsibilities
- Owning and executing distinct work streams within larger analytics engagements.
- Delivering insights based on complex data analysis, within relevant verticals (insurance, health care, banking, etc.)
- Hands-on experience in data manipulation skills in SQL or Python
- Experience in exploratory data analysis and feature engineering
- Must have strong problem-solving capabilities, manage own work diligently, document work thoroughly, communicate analysis processes and outcomes succinctly, and work effectively with clients
- Basic understanding of at least one business area and its components (Healthcare, Insurance, Banking, Telecommunications, Logistics)
- Familiarity with or exposure to cloud engineering
- Ability to understand business requirements and support the lead in solving problems
- Strong verbal and written communications skills
- Actively seeks information to clarify customer needs to deliver a better experience
- Acts promptly to ensure customer needs are fulfilled
This profile will include the following responsibilities:
- Develop Parsers for XML and JSON Data sources/feeds
- Write Automation Scripts for product development
- Build API Integrations for 3rd Party product integration
- Perform Data Analysis
- Research on Machine learning algorithms
- Understand AWS cloud architecture and work with third-party vendors for deployments
- Resolve issues in the AWS environment
We are looking for candidates with:
Qualification: BE/BTech/Bsc-IT/MCA
Programming Language: Python
Web Development: Basic understanding of Web Development. Working knowledge of Python Flask is desirable
Database & Platform: AWS/Docker/MySQL/MongoDB
Basic Understanding of Machine Learning Models & AWS Fundamentals is recommended.
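To illustrate the parser-development responsibility above, here is a minimal sketch using only Python's standard library; the feed formats and field names are hypothetical, chosen just to show XML and JSON sources normalised to the same records:

```python
import json
import xml.etree.ElementTree as ET

def parse_json_feed(raw: str) -> list:
    """Parse a hypothetical JSON feed into a list of records."""
    payload = json.loads(raw)
    return [{"id": item["id"], "value": item["value"]} for item in payload["items"]]

def parse_xml_feed(raw: str) -> list:
    """Parse a hypothetical XML feed carrying the same data."""
    root = ET.fromstring(raw)
    return [
        {"id": int(item.get("id")), "value": item.findtext("value")}
        for item in root.findall("item")
    ]

json_records = parse_json_feed('{"items": [{"id": 1, "value": "a"}]}')
xml_records = parse_xml_feed('<feed><item id="1"><value>a</value></item></feed>')
print(json_records, xml_records)
```

Normalising both sources to one record shape is what makes downstream analysis and API integration format-agnostic.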
We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where your enthusiasm and knowledge will make you our technical evangelist for the development of our inspection technology and products.
At Elop we are developing product lines for sustainable infrastructure management using our own patented ultrasound scanner technology, combined with other sources to give a holistic overview of concrete structures. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard in structural health monitoring. With the right character, you will be professionally challenged and developed.
This position requires travel to Norway.
Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.
Roles and Responsibilities
- Define technical scope and objectives through research and participation in requirements gathering and definition of processes
- Ingest and Process data from data sources (Elop Scanner) in raw format into Big Data ecosystem
- Realtime data feed processing using Big Data ecosystem
- Design, review, implement and optimize data transformation processes in Big Data ecosystem
- Test and prototype new data integration/processing tools, techniques and methodologies
- Conversion of MATLAB code into Python/C/C++.
- Participate in overall test planning for the application integrations, functional areas and projects.
- Work with cross functional teams in an Agile/Scrum environment to ensure a quality product is delivered.
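The real-time feed-processing responsibility above would in practice use the Spark/Kafka stack the posting names; as a stack-agnostic sketch, the ingest-and-window pattern can be shown with plain Python generators (the scanner readings here are made-up values):

```python
from statistics import mean

def ingest(raw_feed):
    """Simulate ingesting raw scanner readings from a feed (values are hypothetical)."""
    for line in raw_feed:
        yield float(line)

def window(readings, size):
    """Group the stream into fixed-size windows, as a stream processor would."""
    buf = []
    for r in readings:
        buf.append(r)
        if len(buf) == size:
            yield buf
            buf = []

feed = ["1.0", "2.0", "3.0", "4.0"]
averages = [mean(w) for w in window(ingest(feed), 2)]
print(averages)  # [1.5, 3.5]
```

The same ingest → window → aggregate shape maps directly onto Spark Structured Streaming's windowed aggregations.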
Desired Candidate Profile
- Bachelor's degree in Statistics, Computer Science or an equivalent field
- 7+ years of experience in Big Data ecosystem, especially Spark, Kafka, Hadoop, HBase.
- 7+ years of hands-on experience in Python/Scala is a must.
- Experience in architecting the big data application is needed.
- Excellent analytical and problem solving skills
- Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data.
- Experience with signal processing is a plus.
- Experience working on client-server architecture is a plus.
- Knowledge about database technologies like RDBMS, Graph DB, Document DB, Apache Cassandra, OpenTSDB
- Good communication skills, written and oral, in English
We can Offer
- An everyday life with exciting and challenging tasks, developing socially beneficial solutions
- Being part of the company's research and development team, creating unique and innovative products
- Colleagues with world-class expertise, in an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
- A good working environment with skilled and committed colleagues, in an organization with short decision paths
- Professional challenges and development
Must have experience with e-commerce projects
About the Company:
This opportunity is with an AI drone technology startup funded by the Indian Army. It is working to develop cutting-edge products to help the Indian Army gain an edge in new-age warfare.
They are working on using drones to neutralize terrorists hidden in deep forests. Get a chance to contribute to securing our borders against the enemy.
Responsibilities:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures/architectures
- Proficiency in at least one major machine learning framework such as Tensorflow, Pytorch
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Highly skilled in Python scripting language
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
Educational Qualification:
MS in Engineering, Applied Mathematics, Data Science, Computer Science or an equivalent field with 3 years of industry experience; a PhD degree; or equivalent industry experience.
Location: Remote until COVID restrictions ease (Hyderabad StackNexus office afterwards)
Experience: 5 to 7 years
Skills required: Hands-on experience in Azure data modelling, Python, SQL and Azure Databricks.
Notice period: Immediate to 15 days
About Graphene
Graphene is a Singapore-headquartered AI company which has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded best AI platform for healthcare at Vivatech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods and financial services.
Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.
Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.
Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups which is self-funded yet profitable and debt-free. We have already created a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.
Job title: - Data Analyst
Job Description
The Data Analyst is responsible for data storage, enrichment, transformation and gathering based on data requests, as well as testing and maintaining data pipelines.
Responsibilities and Duties
- Managing end to end data pipeline from data source to visualization layer
- Ensure data integrity; Ability to pre-empt data errors
- Organized management and storage of data
- Provide quality assurance of data, working with quality assurance analysts if necessary.
- Commissioning and decommissioning of data sets.
- Processing confidential data and information according to guidelines.
- Helping develop reports and analysis.
- Troubleshooting the reporting database environment and reports.
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Supporting the data warehouse in identifying and revising reporting requirements.
- Supporting initiatives for data integrity and normalization.
- Evaluating changes and updates to source production systems.
- Training end-users on new reports and dashboards.
- Initiate data gathering based on data requirements
- Analyse the raw data to check if the requirement is satisfied
Qualifications and Skills
- Technologies required: Python, SQL/NoSQL databases (Cosmos DB)
- Experience required: 2 to 5 years, including experience in data analysis using Python
- Understanding of the software development life cycle
- Plan, coordinate, develop, test and support data pipelines; document and support reporting dashboards (Power BI)
- Automate the steps needed to transform and enrich data
- Communicate issues, risks, and concerns proactively to management. Document the process thoroughly to allow peers to assist with support as needed.
- Excellent verbal and written communication skills
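The end-to-end pipeline responsibilities above (gather, integrity checks, enrichment) can be sketched in plain Python; the record fields, the missing-amount check and the FX-rate enrichment are hypothetical examples, not the actual pipeline:

```python
def gather(rows):
    """Source step: yield raw records (sample data is hypothetical)."""
    yield from rows

def validate(records):
    """Integrity check: drop records with missing amounts, pre-empting data errors downstream."""
    for rec in records:
        if rec.get("amount") is not None:
            yield rec

def enrich(records, fx_rate):
    """Transform step: add a derived field to each surviving record."""
    for rec in records:
        yield {**rec, "amount_usd": round(rec["amount"] * fx_rate, 2)}

raw = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": None}]
result = list(enrich(validate(gather(raw)), fx_rate=0.012))
print(result)  # [{'id': 1, 'amount': 100.0, 'amount_usd': 1.2}]
```

Keeping each stage a small generator makes the pipeline easy to test in isolation before the output feeds a visualization layer such as Power BI.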
Job Description
We are looking for applicants who have a demonstrated research background in machine learning, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ideapoke, in particular with researchers in data collection and development teams, to develop advanced data analytics solutions and work with massive amounts of data collected from various sources.
- 4 to 5 years of academic or professional experience in Artificial Intelligence and Data Analytics, Machine Learning, Natural Language Processing/Text Mining or a related field
- Technical ability and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL and SQL