WE ARE GRAPHENE
Graphene is an award-winning AI company that develops customized insights and data solutions for corporate clients. With a focus on healthcare, consumer goods, and financial services, our proprietary AI platform is disrupting market research with an approach that lets us get into the minds of customers to a degree unprecedented in traditional market research.
Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore Government and universities to create cutting-edge technology. We are gaining traction with many Fortune 500 companies globally.
Graphene has a 6-year track record of delivering financially sustainable growth and is one of the few start-ups that is self-funded yet profitable and debt-free.
We already have a strong bench of leaders in place. Now we are looking to develop more talent for our expansion into the US. Join us and take your growth, and ours, to the next level!
WHAT WILL THE ENGINEER-ML DO?
- Primary Purpose: As part of a highly productive and creative AI (NLP) analytics team, optimize algorithms and models for performance and scalability, and engineer and implement machine learning algorithms into services and pipelines to be consumed at web scale.
- Daily Grind: Interface with data scientists, project managers, and the engineering team to achieve sprint goals on the product roadmap, and ensure healthy models, endpoints, and CI/CD.
- Career Progression: Senior ML Engineer, ML Architect
YOU CAN EXPECT TO
- Work in a product-development team capable of independently authoring software products.
- Guide junior programmers, set up the architecture, and follow modular development approaches.
- Design and develop code that is well documented.
- Optimize the application for maximum speed and scalability.
- Adhere to information security and DevOps best practices.
- Research and develop new approaches to problems.
- Design and implement schemas and databases for the AI application.
- Cross-pollinate with other teams.
HARD AND SOFT SKILLS
Must Have
- Problem-solving abilities
- Extremely strong programming background – data structures and algorithms
- Advanced Machine Learning: TensorFlow, Keras
- Python, spaCy, NLTK, Word2Vec, Graph databases, Knowledge-graph, BERT (derived models), Hyperparameter tuning
- Experience with OOP and design patterns
- Exposure to RDBMS/NoSQL
- Test Driven Development Methodology
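The test-driven development methodology listed above can be illustrated with a minimal red-green cycle using Python's built-in `unittest`; the `tokenize` function and its tests are hypothetical examples, not part of any actual codebase.

```python
import unittest

def tokenize(text):
    """Lowercase and split text into word tokens, stripping edge punctuation."""
    return [w.strip(".,!?") for w in text.lower().split() if w.strip(".,!?")]

class TokenizeTest(unittest.TestCase):
    """In TDD these tests are written first; the function is then implemented
    with just enough code to make them pass."""

    def test_lowercases_and_splits(self):
        self.assertEqual(tokenize("Hello World"), ["hello", "world"])

    def test_strips_punctuation(self):
        self.assertEqual(tokenize("Yes, really!"), ["yes", "really"])

    def test_empty_input(self):
        self.assertEqual(tokenize(""), [])
```

Running `python -m unittest` against the file executes the cycle; in strict TDD, the failing tests exist before `tokenize` does.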
Good to Have
- Working in cloud-native environments (preferably Azure)
- Microservices Architecture
- Enterprise Design Patterns
- Distributed Systems
We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software such as SPSS; comfortable working with large data sets
• R, Python, SAS, and SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
Qualifications and Experience:
• Graduate degree in Statistics, Economics, Econometrics, Computer Science, Engineering, or Mathematics, or an MBA with a strong quantitative background, or equivalent
• Strong track record of work experience in business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in consumer experience/healthcare is a plus
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data.
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality standards are met.
- Support CI/CD processes.
- Foster know-how development and transfer, and the continuous improvement of leading technologies within data engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
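As a rough illustration of the ETL responsibilities above, here is a minimal extract-transform-load sketch in pure Python, standing in for the Spark/cloud tooling a real pipeline would use; the telemetry fields and the data-quality rule are hypothetical.

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse raw CSV telemetry into row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types, drop malformed rows, derive a field."""
    out = []
    for row in rows:
        try:
            temp_c = float(row["temp_c"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed records
        out.append({"sensor": row["sensor"], "temp_f": temp_c * 9 / 5 + 32})
    return out

def load(rows):
    """Load: serialize to newline-delimited JSON, as a target sink might expect."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "sensor,temp_c\na,20.0\nb,not_a_number\nc,0.0\n"
print(load(transform(extract(raw))))
```

The same extract/transform/load shape carries over to PySpark jobs, with DataFrames replacing the row dicts.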
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/Numpy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
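Time-series processing, listed as preferable above, can be sketched as a rolling-window aggregation; in production this would use Pandas, Dask, or Spark, and the signal below is hypothetical.

```python
from collections import deque

def rolling_mean(values, window):
    """Simple moving average over a fixed window; emits one value per input
    once the window is full (the first window-1 points yield no output)."""
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        if len(buf) == window:
            out.append(sum(buf) / window)
    return out

# Smooth a noisy telemetry signal with a 3-sample window.
signal = [10.0, 12.0, 11.0, 13.0, 12.0]
print(rolling_mean(signal, 3))  # → [11.0, 12.0, 12.0]
```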
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage the long-term life of the product, focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stakeholders, including product and engineering teams.
1.5. Build best practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience working on data-intensive projects with a variety of technology stacks, including different programming languages (Java, Python, Scala).
2.3. Experience building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, supporting other teams in running pipelines, jobs, reports, etc.
2.4. Experience with open-source stacks.
2.5. Experience working with RDBMS and NoSQL databases.
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Core computer science academic background.
2.8. Aspiration to continue pursuing a career in the technical stream.
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, and a willingness to test, learn, and improve rapidly.
We have an urgent requirement for Big Data Developer profiles at our reputed MNC company.
Location: Pune/Bangalore/Hyderabad/Nagpur
Experience: 4-9yrs
Skills: PySpark and AWS; or Spark, Scala, and AWS; or Python and AWS
Designing and developing NLP applications
Using effective text representation techniques and classification algorithms
Proven experience as an NLP Engineer or similar role
Understanding of NLP techniques for text representation, semantic extraction techniques, data structures and modeling
Working knowledge of pretrained NLP models (ULMFiT, Transformer, BERT, etc.)
Ability to effectively design software architecture
Deep understanding of text representation techniques (such as n-grams, bag of words, sentiment analysis, etc.), statistics, and classification algorithms
Proficiency with a deep learning framework (such as TensorFlow, Keras, or PyTorch) and libraries (such as scikit-learn, Pandas, and NumPy)
Proficiency with Python and basic libraries for machine learning.
Proficiency with IDEs – Jupyter Notebook, Spyder, Anaconda environments.
Familiarity with Linux (CentOS and Ubuntu)
Ability to select hardware to run an ML model with the required latency
Excellent communication skills
Ability to work in a team
Outstanding analytical and problem-solving skills
Develop machine learning pipeline
Select appropriate datasets and data representation methods
Run machine learning tests and experiments
Perform statistical analysis and fine-tuning using test results
Train and retrain systems when necessary
Experience with Git based version control
Extend existing ML libraries and frameworks
Keep abreast of developments in the field
Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
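As a minimal sketch of the text representation and classification techniques listed above, here is a bag-of-words representation paired with a similarity-based classifier in pure Python; it stands in for the scikit-learn or deep learning models a real system would use, and all documents and labels are hypothetical.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Bag-of-words representation: term -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(labeled_docs):
    """Build one summed bag-of-words 'centroid' per class."""
    centroids = {}
    for label, text in labeled_docs:
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return centroids

def classify(centroids, text):
    """Assign the label whose centroid is most similar to the document."""
    vec = bag_of_words(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

docs = [
    ("sports", "the team won the match"),
    ("sports", "a great goal in the game"),
    ("tech", "the new phone has a fast chip"),
    ("tech", "software update improves the chip"),
]
model = train(docs)
print(classify(model, "who won the game"))  # → sports
```

A production variant would swap the centroids for a trained estimator (e.g. a fine-tuned BERT model) while keeping the same train/classify interface.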
Company Name: Intraedge Technologies Ltd (https://intraedge.com/)
Type: Permanent, Full time
Location: Any
A Bachelor’s degree in computer science, computer engineering, other technical discipline, or equivalent work experience
- 4+ years of software development experience
- 4+ years of experience in programming languages and tools: Python, Spark, Scala, Hadoop, Hive
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding.
Please mail your resume to poornimakattherateintraedgedotcom along with NP, how soon you can join, ECTC, availability for interview, and location.
Delivery Manager
Job Description
- Review customer orders, plan and coordinate the execution of projects, and manage the client accounts
- Develop scope and budget for projects
- Ability to understand all technical aspects of the project and its requirements, articulate and communicate the same to internal stakeholders
- Work with the Presales team to define the technical specification for features and functionalities and also determine the effort associated
- Hands-on experience in creating SDD, SRS, Gantt Charts, etc.
- Work closely with Engineering, Solutioning, and Platform teams during the requirement gathering and documentation phase to understand and establish the scope of development work in projects
- Provide suggestions on implementation approach, limitations/complexity around implementation with respect to the platform used, and recommendations for alternative solutions
- Perform resource allocations and workload assignments according to project requirements.
- Report project status to customers and develop required project documentation.
- Serve as the primary contact for all projects being handled and for concerns in assigned accounts
Must have skills:
- 8+ years of experience leading and delivering projects to high standards and managing high-value accounts
- Basic understanding of application development technologies like Python, ML, API Integration, etc.
- Good understanding of server/storage configuration, API Integration, Cloud deployment, and configuration
- Should have experience working with large government clients and/or large enterprises in BFSI, eCommerce, Healthcare, Retail, and other such verticals
- Proven track record of building positive and productive working relationships with customers for business growth
- Ability to analyze and troubleshoot issues in a timely fashion
- Ability to identify process improvements to achieve cost-effectiveness and time-saving
- Proven ability to operate with authority and take critical business decisions to meet customer expectations.
- Should have exceptional communication skills (verbal and written) in English
Essential Personal Attributes:
- Must be a strong relationship builder with experience managing all stakeholders
- Interest in emerging technologies and how they can be applied to drive business outcomes
- Demonstrated commercial and business focus
- Negotiation and influencing skills utilizing a consultative approach
- Ability to multitask and prioritize work to meet timeframes
- Ability to take ownership of tasks as allocated and raise issues or request resources as appropriate
- Ability to communicate technical information to non-technical colleagues and clients.
- Excellent stakeholder management and reporting skills
- Must be able to translate technical environments into business language
- Strong commercial acumen
- Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories…
- Experience building a data lake on AWS, with hands-on experience in S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, and EMR
- Experience with Apache Spark programming on Databricks
- Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch
- Hands on experience with leveraging CI/CD to rapidly build & test application code
- Expertise in Data governance and Data Quality
- Experience working with PCI Data and working with data scientists is a plus
- 4+ years of experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
- 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
- Python coding skills
- Experience with scikit-learn, pandas, and TensorFlow/Keras
- Machine learning: designing ML models for regression, classification, dimensionality reduction, anomaly detection, etc., and explaining them
- Implementing machine learning models and pushing them to production
- Creating Docker images for ML models; REST API creation in Python
- Additional Skills (Compulsory):
- Knowledge of, and professional experience with, text and NLP projects such as text classification, text summarization, topic modeling, etc.
- Additional Skills (Compulsory):
- Knowledge of, and professional experience with, vision and deep learning for documents: CNNs and deep neural networks using TensorFlow with Keras for object detection, OCR implementation, document extraction, etc.
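The Docker-image and REST-API requirements above can be sketched as a minimal prediction endpoint using only the Python standard library; the `predict` function here is a hypothetical stand-in for a real trained model, which would normally be loaded from a serialized scikit-learn or Keras artifact.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Hypothetical stand-in for a trained model: a fixed linear scorer
    thresholded at 0.5. A real service would load a serialized model here."""
    score = 0.2 * features.get("x1", 0.0) + 0.3 * features.get("x2", 0.0)
    return {"score": score, "label": int(score >= 0.5)}

class PredictHandler(BaseHTTPRequestHandler):
    """POST a JSON body of features; responds with the JSON prediction."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        result = predict(json.loads(body))
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```

A hypothetical Dockerfile would then copy this script into a Python base image and set it as the container's CMD, giving the Docker-packaged ML REST service the role describes.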
2) Fix appointments and conduct online demo sessions on a daily basis, including follow-up sessions
3) Understand the customer's profile and problems
4) Create the need for smart learning and advise student-parents to enrol in SP Robotic Works courses as the solution
5) Handle objections and price negotiation to generate sales revenue
6) Learn and upgrade one's own product knowledge and sales skills to achieve and exceed growing sales target(s)