Bengaluru (Bangalore)
2 - 4 years
₹3L - ₹4L

Skills

Big Data
Python
Elastic Search
Hadoop
Spark
Apache Kafka

Job description

About Cognologix Technologies

Cognologix is a technology and business development firm focused on emerging decentralized business models and innovative technologies related to Blockchain, Machine Learning, Conversational Bots, Big Data and Search. We help enterprises – both large and medium – disrupt by re-imagining their business models and innovating like a start-up. The Cognologix team excels at the ideation, architecture, prototyping and development of cutting-edge products.

Founded

2016

Type

Product

Size

6-50 employees



Similar jobs

Data Scientist

Founded 2020
Bengaluru (Bangalore)
3 - 8 years
₹10L - ₹25L

Job Description: We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We are a fast-growing startup working on an enterprise product: an intelligent data extraction platform for various types of documents.

Your responsibilities:
• Build, improve and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Write code that is well designed and produces deliverable results
• Write code that scales and can be deployed to production

You must have:
• Fundamentals of statistical methods (a must)
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data and neural networks (RNN, LSTM)
• A solid foundation in Python, data structures, algorithms, and general software development skills
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus

Job posted by
Naveen Taalanki

Senior Data Engineer

Founded 2005
Bengaluru (Bangalore)
3 - 7 years
₹16L - ₹40L

Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

What are we looking for:
- 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL or MSSQL, and experience working with Big Data frameworks/platforms/data stores like Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos etc.
- Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud
- 2+ years of experience with public cloud services such as AWS, Azure, or GCP and languages like Java/Python etc.
- 2+ years of development experience in Amazon Redshift, Google BigQuery or Azure data warehouse platforms preferred
- Knowledge of statistical analysis tools like R, SAS etc.
- Familiarity with any data visualization software
- A growth mindset and passion for building things from the ground up, and most importantly, you should be fun to work with

As a data engineer at Recko, you will:
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems

About Recko:
Recko was founded in 2017 to organise the world's transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko Platform, businesses can build, integrate and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo, Acommerce, etc. use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive and time-consuming tasks around day-to-day operations.

Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures. Traditionally, enterprise software has always been built around functionality. We believe software is an extension of one's capability, and it should be delightful and fun to use.

Working at Recko:
We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts and business folks who are on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 60+ members strong, with stellar experience across fintech, e-commerce and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle etc. We are growing aggressively across verticals.

Job posted by
Chandrakala M

Data Scientist

Founded 2018
Chennai, Bengaluru (Bangalore)
2 - 4 years
Best in industry

Responsibilities
- Partner with internal business owners (product, marketing, edit, etc.) to understand needs and develop custom analysis to optimize for user engagement and retention
- Good understanding of the underlying business and the workings of cross-functional teams for successful execution
- Design and develop analyses based on business requirements and challenges
- Leverage statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, etc.
- Provide statistical analysis on custom research projects and consult on A/B testing and other statistical analysis as needed
- Other reports and custom analysis as required
- Identify and use appropriate investigative and analytical technologies to interpret and verify results
- Apply and learn a wide variety of tools and languages to achieve results
- Use best practices to develop statistical and/or machine learning techniques to build models that address business needs

Requirements
- 2 - 4 years of relevant experience in data science
- Preferred education: Bachelor's degree in a technical field or equivalent experience
- Experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms
- Machine learning algorithms: crystal-clear understanding, coding, implementation, error analysis and model tuning knowledge of linear regression, logistic regression, SVM, shallow neural networks, clustering, decision trees, random forest, XGBoost, recommender systems, ARIMA and anomaly detection; feature selection, hyperparameter tuning, model selection, error analysis, boosting and ensemble methods
- Strong with programming languages like Python and data processing using SQL or equivalent, and ability to experiment with newer open source tools
- Experience in normalizing data to ensure it is homogeneous and consistently formatted to enable sorting, querying and analysis
- Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts
- Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive)
- Experience in risk and credit score domains preferred

Job posted by
Poornima B

Data Architect

Founded 1995
Chennai
5 - 8 years
₹6L - ₹7L

Database Architect, 5 - 6 years
- Good knowledge of relational and non-relational databases
- Able to write complex queries, identify problematic queries and provide solutions
- Good hands-on experience with database tools
- Experience in both SQL and NoSQL databases like SQL Server, PostgreSQL, MongoDB, MariaDB, etc.
- Worked on data model preparation, structuring databases, etc.

Job posted by
Jayaraj E

Data Science Engineer

Founded 2018
Chennai
5 - 10 years
₹8L - ₹18L

Job Title: Data Science Engineer
Work Location: Chennai
Experience Level: 5+ years
Package: Up to 18 LPA
Notice Period: Immediate joiners
It's a full-time opportunity with our client.

Mandatory Skills: Machine Learning, Python, Tableau & SQL

Job Requirements:
- 2+ years of industry experience in predictive modeling, data science and analysis
- Experience with ML models including but not limited to regression, random forests, XGBoost
- Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models
- Experience writing code in Python and SQL with documentation for reproducibility
- Strong proficiency in Tableau
- Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools and writing SQL
- Experience writing and speaking about technical concepts to business, technical and lay audiences and giving data-driven presentations
- AWS SageMaker experience is a plus, not required

Job posted by
Venkat B

Hadoop Admin

Founded 2015
Bengaluru (Bangalore)
5 - 9 years
₹16L - ₹20L

Responsibilities
- Responsible for implementation and ongoing administration of Hadoop infrastructure
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Work with data delivery teams to set up new Hadoop users; this includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage and other tools
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and capacity planning
- Monitor Hadoop cluster connectivity and security
- Manage and review Hadoop log files
- File system management and monitoring
- Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required

Qualifications
- Bachelor's degree in Information Technology, Computer Science or other relevant fields
- General operational expertise such as good troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage and networks
- Hadoop skills like HBase, Hive, Pig, Mahout
- Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure it, and take backups
- Good knowledge of Linux, as Hadoop runs on Linux
- Familiarity with open source configuration management and deployment tools such as Puppet or Chef, and Linux scripting

Nice to Have
- Knowledge of troubleshooting core Java applications is a plus

Job posted by
Harpreet kour

Data Engineer

Founded 2015
Bengaluru (Bangalore)
3 - 5 years
₹6L - ₹12L

Data Engineer
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical and physical data models for the implementation teams
• SQL stronghold is a must: advanced SQL working knowledge and experience working with a variety of relational databases, SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, Elasticsearch
• Ability to use a major programming language (e.g. Python/Java) to process data for modelling

Job posted by
geeti gaurav mohanty

Python Developer

Founded 2005
Pune
3 - 6 years
₹5L - ₹9L

Position Name: Software Developer
Required Experience: 3+ years
Number of positions: 4
Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).
Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting
Compensation: As per industry standards.
Joining: Immediate joining is preferable.

Required Skills:
- Strong experience in Python and web frameworks like Django, Tornado and/or Flask
- Experience in data analytics using standard Python libraries such as Pandas, NumPy, Matplotlib
- Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, dc.js and data visualization tools like Plotly, ggplot
- Handling and using large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, Redshift
- Experience in building APIs and multi-threading for tasks on the Linux platform
- Exposure to finance and capital markets will be an added advantage
- Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts
- Worked on building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of a large, high-scale web application
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3

Company Description:
Reval Analytical Services is a fully-owned subsidiary of Virtua Research Inc. US. It is a financial services technology company focused on consensus analytics, peer analytics and Web-enabled information delivery. The Company's unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

Website: www.virtuaresearch.com

Job posted by
Jyoti Nair

Senior Artificial Intelligence/Machine Learning Engineer

Founded 2017
Bengaluru (Bangalore)
3 - 10 years
₹6L - ₹12L

Responsibilities:
- Define the short-term tactics and long-term technology strategy
- Communicate that technical vision to technical and non-technical partners, customers and investors
- Lead the development of AI/ML related products as the company matures into lean, high-performing agile teams
- Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources
- Work collaboratively with the business, partners, and customers to consistently deliver business value
- Own the vision and execution of developing and integrating AI & machine learning into all aspects of the platform
- Drive innovation through the use of technology and unique ways of applying it to business problems

Experience and Qualifications:
- Masters or Ph.D. in AI, computer science, ML, electrical engineering or related fields (statistics, applied math, computational neuroscience)
- Relevant experience leading and building teams and establishing technical direction
- A well-developed portfolio of past software development, composed of some mixture of professional work, open source contributions, and personal projects
- Experience in leading and developing remote and distributed teams
- Ability to think strategically and carry that through to innovative solutions
- Experience with cloud infrastructure
- Experience working with machine learning, artificial intelligence, and large datasets to drive insights and business value
- Experience in agent architectures, deep learning, neural networks, computer vision and NLP
- Experience with distributed computational frameworks (YARN, Spark, Hadoop)
- Proficiency in Python and C++; familiarity with DL frameworks (e.g. neon, TensorFlow, Caffe, etc.)

Personal Attributes:
- Excellent communication skills
- Strong fit with the culture
- Hands-on approach, self-motivated with a strong work ethic
- Ability to learn quickly (technology, business models, target industries)
- Creative and inspired

Superpowers we love:
- Entrepreneurial spirit and a vibrant personality
- Experience with the lean startup build-measure-learn cycle
- Vision for AI
- Extensive understanding of why things are done the way they are done in agile development
- A passion for adding business value

Note: The selected candidate will be offered ESOPs too.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3 - 10 Years

Job posted by
Layak Singh

ETL Talend Developer

Founded 2011
Bengaluru (Bangalore)
5 - 19 years
₹10L - ₹30L

Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig.

To be considered for a Senior Data Engineer position, a person must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership abilities to lead a team providing data-centric solutions with best practices and modern technologies in mind. They look to build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills and have the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with business and customers to obtain business value for the requested work, and are able to communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have demonstrated the ability to perform high-quality work with innovation, both independently and collaboratively.

Job posted by
Shobha B K